`Mixture`

Inherits From: `RandomVariable`

Mixture distribution.

The `Mixture` object implements batched mixture distributions. The mixture model is defined by a `Categorical` distribution (the mixture) and a python list of `Distribution` objects.

Methods supported include `log_prob`, `prob`, `mean`, `sample`, and `entropy_lower_bound`.
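For intuition, the density of a mixture is the convex combination of its component densities, `p(x) = sum_i c_i q_i(x)`, so `log_prob` reduces to a log-sum-exp over component log densities. A minimal pure-Python sketch of that computation (this is just the math being evaluated, not the TensorFlow implementation):

```python
import math

def mixture_log_prob(x, weights, component_log_probs):
    """log p(x) = logsumexp_i(log c_i + log q_i(x)) for a mixture."""
    terms = [math.log(w) + lp(x) for w, lp in zip(weights, component_log_probs)]
    m = max(terms)  # subtract the max to stabilize the log-sum-exp
    return m + math.log(sum(math.exp(t - m) for t in terms))

def normal_log_prob(mu, sigma):
    """Log density of a univariate Gaussian, used as a toy component."""
    def lp(x):
        return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)
    return lp

# Mixture of two Gaussians with weights 0.3 and 0.7, evaluated at x = 0.
lp = mixture_log_prob(0.0, [0.3, 0.7],
                      [normal_log_prob(-1.0, 1.0), normal_log_prob(2.0, 1.0)])
```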

`allow_nan_stats`

Python `bool` describing behavior when a stat is undefined.

Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g., the mean for Student's t for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined.

Returns:

- `allow_nan_stats`: Python `bool`.

`batch_shape`

Shape of a single sample from a single event index as a `TensorShape`.

May be partially defined or unknown.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Returns:

- `batch_shape`: `TensorShape`, possibly unknown.

`cat`

`components`

`dtype`

The `DType` of `Tensor`s handled by this `Distribution`.

`event_shape`

Shape of a single sample from a single batch as a `TensorShape`.

May be partially defined or unknown.

Returns:

- `event_shape`: `TensorShape`, possibly unknown.

`name`

Name prepended to all ops created by this `Distribution`.

`num_components`

`parameters`

Dictionary of parameters used to instantiate this `Distribution`.

`reparameterization_type`

Describes how samples from the distribution are reparameterized.

Currently this is one of the static instances `distributions.FULLY_REPARAMETERIZED` or `distributions.NOT_REPARAMETERIZED`.

Returns: An instance of `ReparameterizationType`.

`sample_shape`

Sample shape of random variable.

`shape`

Shape of random variable.

`unique_name`

Name of random variable with its unique scoping name. Use `name` to just get the name of the random variable.

`validate_args`

Python `bool` indicating possibly expensive checks are enabled.

**init**

```
__init__(
    *args,
    **kwargs
)
```

Initialize a Mixture distribution.

A `Mixture` is defined by a `Categorical` (`cat`, representing the mixture probabilities) and a list of `Distribution` objects all having matching dtype, batch shape, event shape, and continuity properties (the components).

The `num_classes` of `cat` must be possible to infer at graph construction time and match `len(components)`.

Args:

- `cat`: A `Categorical` distribution instance, representing the probabilities of `distributions`.
- `components`: A list or tuple of `Distribution` instances. Each instance must have the same type, be defined on the same domain, and have matching `event_shape` and `batch_shape`.
- `validate_args`: Python `bool`, default `False`. If `True`, raise a runtime error if batch or event ranks are inconsistent between `cat` and any of the distributions. This is only checked if the ranks cannot be determined statically at graph construction time.
- `allow_nan_stats`: Boolean, default `True`. If `False`, raise an exception if a statistic (e.g. mean/mode/etc...) is undefined for any batch member. If `True`, batch members with valid parameters leading to undefined statistics will return NaN for this statistic.
- `name`: A name for this distribution (optional).

Raises:

- `TypeError`: If `cat` is not a `Categorical`, or `components` is not a list or tuple, or the elements of `components` are not instances of `Distribution`, or do not have matching `dtype`.
- `ValueError`: If `components` is an empty list or tuple, or its elements do not have a statically known event rank. If `cat.num_classes` cannot be inferred at graph creation time, or the constant value of `cat.num_classes` is not equal to `len(components)`, or all `components` and `cat` do not have matching static batch shapes, or all components do not have matching static event shapes.
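Sampling from a mixture is ancestral: first draw a component index from the categorical, then draw from that component. A toy pure-Python sketch of this two-stage procedure (illustrative only; the function names here are hypothetical, not the Edward/TensorFlow API):

```python
import random

def sample_mixture(weights, component_samplers, n, seed=0):
    """Draw n samples: pick a component index from the categorical
    weights, then sample from the chosen component."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Categorical draw over the mixture weights.
        i = rng.choices(range(len(weights)), weights=weights)[0]
        samples.append(component_samplers[i](rng))
    return samples

# Two-component Gaussian mixture. Note len(weights) must equal
# len(component_samplers) -- mirroring the requirement above that
# cat.num_classes match len(components).
draws = sample_mixture(
    [0.3, 0.7],
    [lambda r: r.gauss(-1.0, 1.0), lambda r: r.gauss(2.0, 1.0)],
    n=1000)
```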

**abs**

`__abs__()`

**add**

`__add__(other)`

**and**

`__and__(other)`

**bool**

`__bool__()`

**div**

`__div__(other)`

**eq**

`__eq__(other)`

**floordiv**

`__floordiv__(other)`

**ge**

`__ge__(other)`

**getitem**

`__getitem__(key)`

Subset the tensor associated with the random variable, not the random variable itself.

**gt**

`__gt__(other)`

**invert**

`__invert__()`

**iter**

`__iter__()`

**le**

`__le__(other)`

**lt**

`__lt__(other)`

**mod**

`__mod__(other)`

**mul**

`__mul__(other)`

**neg**

`__neg__()`

**nonzero**

`__nonzero__()`

**or**

`__or__(other)`

**pow**

`__pow__(other)`

**radd**

`__radd__(other)`

**rand**

`__rand__(other)`

**rdiv**

`__rdiv__(other)`

**rfloordiv**

`__rfloordiv__(other)`

**rmod**

`__rmod__(other)`

**rmul**

`__rmul__(other)`

**ror**

`__ror__(other)`

**rpow**

`__rpow__(other)`

**rsub**

`__rsub__(other)`

**rtruediv**

`__rtruediv__(other)`

**rxor**

`__rxor__(other)`

**sub**

`__sub__(other)`

**truediv**

`__truediv__(other)`

**xor**

`__xor__(other)`

`batch_shape_tensor`

`batch_shape_tensor(name='batch_shape_tensor')`

Shape of a single sample from a single event index as a 1-D `Tensor`.

The batch dimensions are indexes into independent, non-identical parameterizations of this distribution.

Args:

- `name`: name to give to the op.

Returns:

- `batch_shape`: `Tensor`.

`cdf`

```
cdf(
    value,
    name='cdf'
)
```

Cumulative distribution function.

Given random variable `X`, the cumulative distribution function `cdf` is:

`cdf(x) := P[X <= x]`

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns:

- `cdf`: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`copy`

`copy(**override_parameters_kwargs)`

Creates a deep copy of the distribution.

Note: the copy distribution may continue to depend on the original initialization arguments.

Args:

- `**override_parameters_kwargs`: String/value dictionary of initialization arguments to override with new values.

Returns:

- `distribution`: A new instance of `type(self)` initialized from the union of `self.parameters` and `override_parameters_kwargs`, i.e., `dict(self.parameters, **override_parameters_kwargs)`.

`covariance`

`covariance(name='covariance')`

Covariance.

Covariance is (possibly) defined only for non-scalar-event distributions.

For example, for a length-`k`, vector-valued distribution, it is calculated as,

`Cov[i, j] = Covariance(X_i, X_j) = E[(X_i - E[X_i]) (X_j - E[X_j])]`

where `Cov` is a (batch of) `k x k` matrix, `0 <= (i, j) < k`, and `E` denotes expectation.

Alternatively, for non-vector, multivariate distributions (e.g., matrix-valued, Wishart), `Covariance` shall return a (batch of) matrices under some vectorization of the events, i.e.,

`Cov[i, j] = Covariance(Vec(X)_i, Vec(X)_j) = [as above]`

where `Cov` is a (batch of) `k' x k'` matrices, `0 <= (i, j) < k' = reduce_prod(event_shape)`, and `Vec` is some function mapping indices of this distribution's event dimensions to indices of a length-`k'` vector.
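As a concrete illustration of the formula above, here is the sample estimate of `Cov[i, j]` for a length-2 vector variable in plain Python (a sketch of the definition, not the library's implementation):

```python
import random

def sample_covariance(samples):
    """Estimate Cov[i, j] = E[(X_i - E[X_i]) (X_j - E[X_j])] from a
    list of length-k sample vectors."""
    n = len(samples)
    k = len(samples[0])
    means = [sum(s[i] for s in samples) / n for i in range(k)]
    return [[sum((s[i] - means[i]) * (s[j] - means[j]) for s in samples) / n
             for j in range(k)]
            for i in range(k)]

# Correlated 2-D samples: X_1 = X_0 + noise, so Cov[0, 1] > 0.
rng = random.Random(0)
xs = []
for _ in range(5000):
    x0 = rng.gauss(0.0, 1.0)
    xs.append([x0, x0 + rng.gauss(0.0, 0.5)])
cov = sample_covariance(xs)  # a 2 x 2 symmetric matrix
```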

Args:

- `name`: The name to give this op.

Returns:

- `covariance`: Floating-point `Tensor` with shape `[B1, ..., Bn, k', k']` where the first `n` dimensions are batch coordinates and `k' = reduce_prod(self.event_shape)`.

`entropy`

`entropy(name='entropy')`

Shannon entropy in nats.

`entropy_lower_bound`

`entropy_lower_bound(name='entropy_lower_bound')`

A lower bound on the entropy of this mixture model.

The bound below is not always very tight, and its usefulness depends on the mixture probabilities and the components in use.

A lower bound is useful for ELBO when the `Mixture` is the variational distribution:

\( \log p(x) \geq ELBO = \int q(z) \log p(x, z) dz + H[q] \)

where \( p \) is the prior distribution, \( q \) is the variational, and \( H[q] \) is the entropy of \( q \). If there is a lower bound \( G[q] \) such that \( H[q] \geq G[q] \) then it can be used in place of \( H[q] \).

For a mixture of distributions \( q(Z) = \sum_i c_i q_i(Z) \) with \( \sum_i c_i = 1 \), by the concavity of \( f(x) = -x \log x \), a simple lower bound is:

\[
\begin{align}
H[q] & = - \int q(z) \log q(z) dz \\
& = - \int \left( \sum_i c_i q_i(z) \right) \log \left( \sum_i c_i q_i(z) \right) dz \\
& \geq - \sum_i c_i \int q_i(z) \log q_i(z) dz \\
& = \sum_i c_i H[q_i]
\end{align}
\]

This is the term we calculate below for \( G[q] \).
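A quick Monte Carlo check of the bound \( H[q] \geq \sum_i c_i H[q_i] \) for a two-Gaussian mixture (illustrative plain Python; the components, weights, and seed are arbitrary choices):

```python
import math
import random

def normal_entropy(sigma):
    """Closed-form Gaussian entropy: 0.5 * log(2 * pi * e * sigma^2)."""
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

weights = [0.5, 0.5]
params = [(-2.0, 1.0), (2.0, 1.0)]  # well-separated components

# G[q] = sum_i c_i H[q_i], the lower bound computed by entropy_lower_bound.
lower_bound = sum(c * normal_entropy(s) for c, (_, s) in zip(weights, params))

# Monte Carlo estimate of H[q] = -E_q[log q(Z)] via ancestral sampling.
rng = random.Random(0)
total, n = 0.0, 20000
for _ in range(n):
    i = rng.choices(range(2), weights=weights)[0]
    z = rng.gauss(*params[i])
    q = sum(c * normal_pdf(z, m, s) for c, (m, s) in zip(weights, params))
    total -= math.log(q)
entropy_estimate = total / n  # exceeds lower_bound, with slack ~ log 2 here
```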

Args:

- `name`: A name for this operation (optional).

Returns: A lower bound on the Mixture's entropy.

`eval`

```
eval(
    session=None,
    feed_dict=None
)
```

In a session, computes and returns the value of this random variable.

This is not a graph construction method; it does not add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used.

Args:

- `session`: tf.BaseSession, optional. The `tf.Session` to use to evaluate this random variable. If none, the default session is used.
- `feed_dict`: dict, optional. A dictionary that maps `tf.Tensor` objects to feed values. See `tf.Session.run()` for a description of the valid feed values.

```
x = Normal(0.0, 1.0)
with tf.Session() as sess:
    # Usage passing the session explicitly.
    print(x.eval(sess))
    # Usage with the default session. The 'with' block
    # above makes 'sess' the default session.
    print(x.eval())
```

`event_shape_tensor`

`event_shape_tensor(name='event_shape_tensor')`

Shape of a single sample from a single batch as a 1-D int32 `Tensor`.

Args:

- `name`: name to give to the op.

Returns:

- `event_shape`: `Tensor`.

`get_ancestors`

`get_ancestors(collection=None)`

Get ancestor random variables.

`get_blanket`

`get_blanket(collection=None)`

Get the random variable's Markov blanket.

`get_children`

`get_children(collection=None)`

Get child random variables.

`get_descendants`

`get_descendants(collection=None)`

Get descendant random variables.

`get_parents`

`get_parents(collection=None)`

Get parent random variables.

`get_shape`

`get_shape()`

Get shape of random variable.

`get_siblings`

`get_siblings(collection=None)`

Get sibling random variables.

`get_variables`

`get_variables(collection=None)`

Get TensorFlow variables that the random variable depends on.

`is_scalar_batch`

`is_scalar_batch(name='is_scalar_batch')`

Indicates that `batch_shape == []`.

Args:

- `name`: The name to give this op.

Returns:

- `is_scalar_batch`: `bool` scalar `Tensor`.

`is_scalar_event`

`is_scalar_event(name='is_scalar_event')`

Indicates that `event_shape == []`.

Args:

- `name`: The name to give this op.

Returns:

- `is_scalar_event`: `bool` scalar `Tensor`.

`log_cdf`

```
log_cdf(
    value,
    name='log_cdf'
)
```

Log cumulative distribution function.

Given random variable `X`, the cumulative distribution function `cdf` is:

`log_cdf(x) := Log[ P[X <= x] ]`

Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`.
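To see why a dedicated `log_cdf` matters, consider a standard normal: for very negative `x` the cdf underflows to 0.0 in float64, so `log(cdf(x))` fails, while the textbook leading-order tail expansion `log_cdf(x) ≈ -x^2/2 - log(-x * sqrt(2*pi))` still returns a finite value. A plain-Python sketch (the asymptotic formula here is a standard approximation used for illustration, not the library's method):

```python
import math

def normal_cdf(x):
    """Standard normal cdf via erfc; underflows to 0.0 for x << -1."""
    return 0.5 * math.erfc(-x / math.sqrt(2.0))

def normal_log_cdf_tail(x):
    """Leading-order tail approximation for x << -1:
    P[X <= x] ~ phi(x) / (-x), so
    log cdf ~ -x^2/2 - log(-x * sqrt(2*pi))."""
    return -0.5 * x * x - math.log(-x * math.sqrt(2.0 * math.pi))

naive_cdf = normal_cdf(-40.0)        # underflows to exactly 0.0
stable = normal_log_cdf_tail(-40.0)  # finite, roughly -804.6
```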

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns:

- `logcdf`: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`log_prob`

```
log_prob(
    value,
    name='log_prob'
)
```

Log probability density/mass function.

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns:

- `log_prob`: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`log_survival_function`

```
log_survival_function(
    value,
    name='log_survival_function'
)
```

Log survival function.

Given random variable `X`, the survival function is defined:

```
log_survival_function(x) = Log[ P[X > x] ]
                         = Log[ 1 - P[X <= x] ]
                         = Log[ 1 - cdf(x) ]
```

Typically, different numerical approximations can be used for the log survival function, which are more accurate than `1 - cdf(x)` when `x >> 1`.

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns: `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`mean`

`mean(name='mean')`

Mean.

`mode`

`mode(name='mode')`

Mode.

`param_shapes`

```
param_shapes(
    cls,
    sample_shape,
    name='DistributionParamShapes'
)
```

Shapes of parameters given the desired shape of a call to `sample()`.

This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`.

Subclasses should override class method `_param_shapes`.

Args:

- `sample_shape`: `Tensor` or python list/tuple. Desired shape of a call to `sample()`.
- `name`: name to prepend ops with.

Returns: `dict` of parameter name to `Tensor` shapes.

`param_static_shapes`

```
param_static_shapes(
    cls,
    sample_shape
)
```

param_shapes with static (i.e. `TensorShape`) shapes.

This is a class method that describes what key/value arguments are required to instantiate the given `Distribution` so that a particular shape is returned for that instance's call to `sample()`. Assumes that the sample's shape is known statically.

Subclasses should override class method `_param_shapes` to return constant-valued tensors when constant values are fed.

Args:

- `sample_shape`: `TensorShape` or python list/tuple. Desired shape of a call to `sample()`.

Returns: `dict` of parameter name to `TensorShape`.

Raises:

- `ValueError`: if `sample_shape` is a `TensorShape` and is not fully defined.

`prob`

```
prob(
    value,
    name='prob'
)
```

Probability density/mass function.

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns:

- `prob`: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`quantile`

```
quantile(
    value,
    name='quantile'
)
```

Quantile function. Aka "inverse cdf" or "percent point function".

Given random variable `X` and `p in [0, 1]`, the `quantile` is:

`quantile(p) := x such that P[X <= x] == p`
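For a distribution with a closed-form cdf, the quantile is its literal inverse. E.g. for an exponential with rate `lam`, `cdf(x) = 1 - exp(-lam * x)` inverts to `quantile(p) = -log(1 - p) / lam`. A plain-Python sketch (the exponential is just a convenient stand-in, not this class's distribution):

```python
import math

def exponential_cdf(x, rate):
    """cdf(x) = P[X <= x] = 1 - exp(-rate * x) for x >= 0."""
    return 1.0 - math.exp(-rate * x)

def exponential_quantile(p, rate):
    """quantile(p) := x such that P[X <= x] == p."""
    return -math.log1p(-p) / rate  # log1p for accuracy when p is near 0

# Round trip: quantile(cdf(x)) recovers x.
x = 1.7
p = exponential_cdf(x, rate=2.0)
recovered = exponential_quantile(p, rate=2.0)
```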

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns:

- `quantile`: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`sample`

```
sample(
    sample_shape=(),
    seed=None,
    name='sample'
)
```

Generate samples of the specified shape.

Note that a call to `sample()` without arguments will generate a single sample.

Args:

- `sample_shape`: 0D or 1D `int32` `Tensor`. Shape of the generated samples.
- `seed`: Python integer seed for RNG.
- `name`: name to give to the op.

Returns:

- `samples`: a `Tensor` with prepended dimensions `sample_shape`.

`stddev`

`stddev(name='stddev')`

Standard deviation.

Standard deviation is defined as,

`stddev = E[(X - E[X])**2]**0.5`

where `X` is the random variable associated with this distribution, `E` denotes expectation, and `stddev.shape = batch_shape + event_shape`.

Args:

- `name`: The name to give this op.

Returns:

- `stddev`: Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.

`survival_function`

```
survival_function(
    value,
    name='survival_function'
)
```

Survival function.

Given random variable `X`, the survival function is defined:

```
survival_function(x) = P[X > x]
                     = 1 - P[X <= x]
                     = 1 - cdf(x)
```

Args:

- `value`: `float` or `double` `Tensor`.
- `name`: The name to give this op.

Returns: `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.

`value`

`value()`

Get tensor that the random variable corresponds to.

`variance`

`variance(name='variance')`

Variance.

Variance is defined as,

`Var = E[(X - E[X])**2]`

where `X` is the random variable associated with this distribution, `E` denotes expectation, and `Var.shape = batch_shape + event_shape`.

Args:

- `name`: The name to give this op.

Returns:

- `variance`: Floating-point `Tensor` with shape identical to `batch_shape + event_shape`, i.e., the same shape as `self.mean()`.