`RandomVariable`

- Class `ed.RandomVariable`
- Class `ed.models.RandomVariable`

Defined in `edward/models/random_variable.py`.

Base class for random variables.

A random variable is an object parameterized by tensors. It is equipped with methods such as the log-density, mean, and sample.

It also wraps a tensor, where the tensor corresponds to a sample from the random variable. This enables operations on the TensorFlow graph, allowing random variables to be used in conjunction with other TensorFlow ops.
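
For instance, in the hedged sketch below (assuming Edward's `Normal` class from the examples later in this page), ordinary TensorFlow arithmetic on a random variable operates on the tensor it wraps:

```
import tensorflow as tf
from edward.models import Normal

x = Normal(0.0, 1.0)
# Arithmetic mixes the random variable with other TensorFlow ops by
# operating on the sample tensor that x wraps; y is a tf.Tensor.
y = tf.constant(2.0) * x + 1.0
```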

The random variable’s shape is given by

`sample_shape + batch_shape + event_shape`,

where `sample_shape` is an optional argument representing the dimensions of samples drawn from the distribution (default is a scalar); `batch_shape` is the number of independent random variables (determined by the shape of its parameters); and `event_shape` is the shape of one draw from the distribution (e.g., `Normal` has a scalar `event_shape`; `Dirichlet` has a vector `event_shape`).
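
As a minimal sketch of how these shape components compose (the parameter values are illustrative, and it is assumed that `sample_shape` accepts an integer as a `tf.TensorShape`-compatible value):

```
import tensorflow as tf
from edward.models import Normal

# Three independent scalar normals: batch_shape is (3,) from the
# parameter shapes, and Normal's event_shape is scalar.
x = Normal(tf.zeros(3), tf.ones(3), sample_shape=5)

# shape = sample_shape + batch_shape + event_shape = (5, 3)
print(x.shape)
```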

`RandomVariable` assumes use in a multiple inheritance setting. The child class must first inherit `RandomVariable`, then second inherit a class in `tf.contrib.distributions`. With Python's method resolution order, this implies the following during initialization (using `distributions.Bernoulli` as an example):

- Start the `__init__()` of the child class, which passes all `*args, **kwargs` to `RandomVariable`.
- This in turn passes all `*args, **kwargs` to `distributions.Bernoulli`, completing the `__init__()` of `distributions.Bernoulli`.
- Complete the `__init__()` of `RandomVariable`, which calls `self.sample()`, relying on the method from `distributions.Bernoulli`.
- Complete the `__init__()` of the child class.

Methods from both `RandomVariable` and `distributions.Bernoulli` populate the namespace of the child class. Methods from `RandomVariable` will take higher priority if there are conflicts.
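
To make this ordering concrete, here is a minimal, hypothetical sketch of how such a child class could be declared (the classes shipped in `edward.models` are generated programmatically, so this is illustrative only):

```
from tensorflow.contrib import distributions
from edward.models import RandomVariable

class Bernoulli(RandomVariable, distributions.Bernoulli):
  """Inherit RandomVariable first, the distribution class second."""
  def __init__(self, *args, **kwargs):
    # Everything is forwarded to RandomVariable, which in turn
    # initializes distributions.Bernoulli and then calls self.sample().
    super(Bernoulli, self).__init__(*args, **kwargs)
```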

```
# Bernoulli with a fixed probability.
p = tf.constant(0.5)
x = Bernoulli(p)

# Bernoulli whose logits are the output of another TensorFlow op.
z1 = tf.constant([[1.0, -0.8], [0.3, -1.0]])
z2 = tf.constant([[0.9, 0.2], [2.0, -0.1]])
x = Bernoulli(logits=tf.matmul(z1, z2))

# Normal whose mean is itself a random variable.
mu = Normal(tf.constant(0.0), tf.constant(1.0))
x = Normal(mu, tf.constant(1.0))
```

`sample_shape`

Sample shape of random variable.

`shape`

Shape of random variable.

`__init__`

```
__init__(
*args,
**kwargs
)
```

Create a new random variable.

Args:
- `sample_shape`: tf.TensorShape. Shape of samples to draw from the random variable.
- `value`: tf.Tensor. Fixed tensor to associate with random variable. Must have shape `sample_shape + batch_shape + event_shape`.
- `collections`: list. Optional list of graph collections (lists). The random variable is added to these collections. Defaults to `[ed.random_variables()]`.
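
A hedged sketch of these arguments, reusing the `Normal` and `Bernoulli` classes from the examples above (the dtype expected for `value` is assumed to follow the wrapped distribution, `int32` for `Bernoulli` here):

```
import tensorflow as tf
from edward.models import Bernoulli, Normal

# Draw a (10,)-shaped batch of samples from a scalar Normal.
x = Normal(0.0, 1.0, sample_shape=10)

# Associate a fixed tensor with the random variable instead of a draw.
# Its shape must equal sample_shape + batch_shape + event_shape.
y = Bernoulli(logits=tf.zeros(5), value=tf.ones(5, dtype=tf.int32))
```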

`__abs__`

```
__abs__(
a,
*args
)
```

Computes the absolute value of a tensor.

Given a tensor `x` of complex numbers, this operation returns a tensor of type `float32` or `float64` that is the absolute value of each element in `x`. All elements in `x` must be complex numbers of the form \(a + bj\). The absolute value is computed as \(\sqrt{a^2 + b^2}\). For example:

```
x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
tf.abs(x) # [5.25594902, 6.60492229]
```

Args:
- `x`: A `Tensor` or `SparseTensor` of type `float32`, `float64`, `int32`, `int64`, `complex64` or `complex128`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` or `SparseTensor` the same size and type as `x` with absolute values. Note, for `complex64` or `complex128` input, the returned `Tensor` will be of type `float32` or `float64`, respectively.

`__add__`

```
__add__(
a,
*args
)
```

Returns x + y element-wise.

*NOTE*: `Add` supports broadcasting. `AddN` does not.

Args:
- `x`: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__and__`

```
__and__(
a,
*args
)
```

Returns the truth value of x AND y element-wise.

*NOTE*: `LogicalAnd` supports broadcasting.

Args:
- `x`: A `Tensor` of type `bool`.
- `y`: A `Tensor` of type `bool`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__bool__`

`__bool__()`

`__div__`

```
__div__(
a,
*args
)
```

Divide two values using Python 2 semantics. Used for `Tensor.__div__`.

Args:
- `x`: `Tensor` numerator of real numeric type.
- `y`: `Tensor` denominator of real numeric type.
- `name`: A name for the operation (optional).

Returns:
`x / y` returns the quotient of x and y.

`__eq__`

`__eq__(other)`

`__floordiv__`

```
__floordiv__(
a,
*args
)
```

Divides `x / y` elementwise, rounding toward the most negative integer.

The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by `x // y` floor division in Python 3 and in Python 2.7 with `from __future__ import division`.

Note that for efficiency, `floordiv` uses C semantics for negative numbers (unlike Python and Numpy).

`x` and `y` must have the same type, and the result will have the same type as well.

Args:
- `x`: `Tensor` numerator of real numeric type.
- `y`: `Tensor` denominator of real numeric type.
- `name`: A name for the operation (optional).

Returns:
`x / y` rounded down (except possibly towards zero for negative integers).

Raises:
- `TypeError`: If the inputs are complex.

`__ge__`

```
__ge__(
a,
*args
)
```

Returns the truth value of (x >= y) element-wise.

*NOTE*: `GreaterEqual` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__getitem__`

```
__getitem__(
a,
*args
)
```

Overload for `Tensor.__getitem__`.

This operation extracts the specified region from the tensor. The notation is similar to NumPy, with the restriction that currently only basic indexing is supported. That means that using a non-scalar tensor as input is not currently allowed.

Some useful examples:

```
# strip leading and trailing 2 elements
foo = tf.constant([1,2,3,4,5,6])
print(foo[2:-2].eval()) # => [3,4]
# take every other row and reverse the elements in each of those rows
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[::2,::-1].eval()) # => [[3,2,1], [9,8,7]]
# Use scalar tensors as indices on both dimensions
print(foo[tf.constant(0), tf.constant(2)].eval()) # => 3
# Insert another dimension
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[:, tf.newaxis, :].eval()) # => [[[1,2,3]], [[4,5,6]], [[7,8,9]]]
print(foo[:, :, tf.newaxis].eval()) # => [[[1],[2],[3]], [[4],[5],[6]],
[[7],[8],[9]]]
# Ellipses (3 equivalent operations)
foo = tf.constant([[1,2,3], [4,5,6], [7,8,9]])
print(foo[tf.newaxis, :, :].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis, ...].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
print(foo[tf.newaxis].eval()) # => [[[1,2,3], [4,5,6], [7,8,9]]]
```

Notes:
- `tf.newaxis` is `None` as in NumPy.
- An implicit ellipsis is placed at the end of the `slice_spec`.
- NumPy advanced indexing is currently not supported.

Args:
- `tensor`: An ops.Tensor object.
- `slice_spec`: The arguments to `Tensor.__getitem__`.
- `var`: In the case of variable slice assignment, the Variable object to slice (i.e. tensor is the read-only view of this variable).

Returns:
The appropriate slice of "tensor", based on "slice_spec".

Raises:
- `ValueError`: If a slice range is negative size.
- `TypeError`: If the slice indices aren't int, slice, or Ellipsis.
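
Since `RandomVariable` overloads `__getitem__`, the same indexing can be applied to a random variable; a small sketch (assuming Edward's `Normal` class), where the result is a `tf.Tensor` slice of the wrapped sample:

```
import tensorflow as tf
from edward.models import Normal

x = Normal(tf.zeros(5), tf.ones(5))
# Indexing dispatches to the tensor wrapped by x; head is a tf.Tensor.
head = x[:3]
```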

`__gt__`

```
__gt__(
a,
*args
)
```

Returns the truth value of (x > y) element-wise.

*NOTE*: `Greater` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__invert__`

```
__invert__(
a,
*args
)
```

Returns the truth value of NOT x element-wise.

Args:
- `x`: A `Tensor` of type `bool`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__iter__`

`__iter__()`

`__le__`

```
__le__(
a,
*args
)
```

Returns the truth value of (x <= y) element-wise.

*NOTE*: `LessEqual` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__lt__`

```
__lt__(
a,
*args
)
```

Returns the truth value of (x < y) element-wise.

*NOTE*: `Less` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `float32`, `float64`, `int32`, `uint8`, `int16`, `int8`, `int64`, `bfloat16`, `uint16`, `half`, `uint32`, `uint64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__matmul__`

```
__matmul__(
a,
*args
)
```

Multiplies matrix `a` by matrix `b`, producing `a` * `b`.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match.

Both matrices must be of the same type. The supported types are: `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to `True`. These are `False` by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes `bfloat16` or `float32`.

For example:

```
# 2-D tensor `a`
# [[1, 2, 3],
# [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
# 2-D tensor `b`
# [[ 7, 8],
# [ 9, 10],
# [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
# `a` * `b`
# [[ 58, 64],
# [139, 154]]
c = tf.matmul(a, b)
# 3-D tensor `a`
# [[[ 1, 2, 3],
# [ 4, 5, 6]],
# [[ 7, 8, 9],
# [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32),
shape=[2, 2, 3])
# 3-D tensor `b`
# [[[13, 14],
# [15, 16],
# [17, 18]],
# [[19, 20],
# [21, 22],
# [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32),
shape=[2, 3, 2])
# `a` * `b`
# [[[ 94, 100],
# [229, 244]],
# [[508, 532],
# [697, 730]]]
c = tf.matmul(a, b)
# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])
```

Args:
- `a`: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128` and rank > 1.
- `b`: `Tensor` with same type and rank as `a`.
- `transpose_a`: If `True`, `a` is transposed before multiplication.
- `transpose_b`: If `True`, `b` is transposed before multiplication.
- `adjoint_a`: If `True`, `a` is conjugated and transposed before multiplication.
- `adjoint_b`: If `True`, `b` is conjugated and transposed before multiplication.
- `a_is_sparse`: If `True`, `a` is treated as a sparse matrix.
- `b_is_sparse`: If `True`, `b` is treated as a sparse matrix.
- `name`: Name for the operation (optional).

Returns:
A `Tensor` of the same type as `a` and `b` where each inner-most matrix is the product of the corresponding matrices in `a` and `b`, e.g. if all transpose or adjoint attributes are `False`:

`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, for all indices i, j.

Note: This is matrix product, not element-wise product.

Raises:
- `ValueError`: If `transpose_a` and `adjoint_a`, or `transpose_b` and `adjoint_b` are both set to `True`.

`__mod__`

```
__mod__(
a,
*args
)
```

Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.

*NOTE*: `FloorMod` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `float32`, `float64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__mul__`

```
__mul__(
a,
*args
)
```

Dispatches element-wise (cwise) multiplication for `Dense*Dense` and `Dense*Sparse`.

`__neg__`

```
__neg__(
a,
*args
)
```

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Args:
- `x`: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `int32`, `int64`, `complex64`, `complex128`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__nonzero__`

`__nonzero__()`

`__or__`

```
__or__(
a,
*args
)
```

Returns the truth value of x OR y element-wise.

*NOTE*: `LogicalOr` supports broadcasting.

Args:
- `x`: A `Tensor` of type `bool`.
- `y`: A `Tensor` of type `bool`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__pow__`

```
__pow__(
a,
*args
)
```

Computes the power of one value to another.

Given a tensor `x` and a tensor `y`, this operation computes \(x^y\) for corresponding elements in `x` and `y`. For example:

```
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
```

Args:
- `x`: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`, or `complex128`.
- `y`: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`, or `complex128`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`.

`__radd__`

```
__radd__(
a,
*args
)
```

Returns x + y element-wise.

*NOTE*: `Add` supports broadcasting. `AddN` does not.

Args:
- `x`: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `uint8`, `int8`, `int16`, `int32`, `int64`, `complex64`, `complex128`, `string`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__rand__`

```
__rand__(
a,
*args
)
```

Returns the truth value of x AND y element-wise.

*NOTE*: `LogicalAnd` supports broadcasting.

Args:
- `x`: A `Tensor` of type `bool`.
- `y`: A `Tensor` of type `bool`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__rdiv__`

```
__rdiv__(
a,
*args
)
```

Divide two values using Python 2 semantics. Used for `Tensor.__div__`.

Args:
- `x`: `Tensor` numerator of real numeric type.
- `y`: `Tensor` denominator of real numeric type.
- `name`: A name for the operation (optional).

Returns:
`x / y` returns the quotient of x and y.

`__rfloordiv__`

```
__rfloordiv__(
a,
*args
)
```

Divides `x / y` elementwise, rounding toward the most negative integer.

The same as `tf.div(x,y)` for integers, but uses `tf.floor(tf.div(x,y))` for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by `x // y` floor division in Python 3 and in Python 2.7 with `from __future__ import division`.

Note that for efficiency, `floordiv` uses C semantics for negative numbers (unlike Python and Numpy).

`x` and `y` must have the same type, and the result will have the same type as well.

Args:
- `x`: `Tensor` numerator of real numeric type.
- `y`: `Tensor` denominator of real numeric type.
- `name`: A name for the operation (optional).

Returns:
`x / y` rounded down (except possibly towards zero for negative integers).

Raises:
- `TypeError`: If the inputs are complex.

`__rmatmul__`

```
__rmatmul__(
a,
*args
)
```

Multiplies matrix `a` by matrix `b`, producing `a` * `b`.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication arguments, and any further outer dimensions match.

Both matrices must be of the same type. The supported types are: `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128`.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to `True`. These are `False` by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding `a_is_sparse` or `b_is_sparse` flag to `True`. These are `False` by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes `bfloat16` or `float32`.

For example:

```
# 2-D tensor `a`
# [[1, 2, 3],
# [4, 5, 6]]
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
# 2-D tensor `b`
# [[ 7, 8],
# [ 9, 10],
# [11, 12]]
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
# `a` * `b`
# [[ 58, 64],
# [139, 154]]
c = tf.matmul(a, b)
# 3-D tensor `a`
# [[[ 1, 2, 3],
# [ 4, 5, 6]],
# [[ 7, 8, 9],
# [10, 11, 12]]]
a = tf.constant(np.arange(1, 13, dtype=np.int32),
shape=[2, 2, 3])
# 3-D tensor `b`
# [[[13, 14],
# [15, 16],
# [17, 18]],
# [[19, 20],
# [21, 22],
# [23, 24]]]
b = tf.constant(np.arange(13, 25, dtype=np.int32),
shape=[2, 3, 2])
# `a` * `b`
# [[[ 94, 100],
# [229, 244]],
# [[508, 532],
# [697, 730]]]
c = tf.matmul(a, b)
# Since python >= 3.5 the @ operator is supported (see PEP 465).
# In TensorFlow, it simply calls the `tf.matmul()` function, so the
# following lines are equivalent:
d = a @ b @ [[10.], [11.]]
d = tf.matmul(tf.matmul(a, b), [[10.], [11.]])
```

Args:
- `a`: `Tensor` of type `float16`, `float32`, `float64`, `int32`, `complex64`, `complex128` and rank > 1.
- `b`: `Tensor` with same type and rank as `a`.
- `transpose_a`: If `True`, `a` is transposed before multiplication.
- `transpose_b`: If `True`, `b` is transposed before multiplication.
- `adjoint_a`: If `True`, `a` is conjugated and transposed before multiplication.
- `adjoint_b`: If `True`, `b` is conjugated and transposed before multiplication.
- `a_is_sparse`: If `True`, `a` is treated as a sparse matrix.
- `b_is_sparse`: If `True`, `b` is treated as a sparse matrix.
- `name`: Name for the operation (optional).

Returns:
A `Tensor` of the same type as `a` and `b` where each inner-most matrix is the product of the corresponding matrices in `a` and `b`, e.g. if all transpose or adjoint attributes are `False`:

`output[..., i, j] = sum_k (a[..., i, k] * b[..., k, j])`, for all indices i, j.

Note: This is matrix product, not element-wise product.

Raises:
- `ValueError`: If `transpose_a` and `adjoint_a`, or `transpose_b` and `adjoint_b` are both set to `True`.

`__rmod__`

```
__rmod__(
a,
*args
)
```

Returns element-wise remainder of division. When `x < 0` xor `y < 0` is true, this follows Python semantics in that the result here is consistent with a flooring divide. E.g. `floor(x / y) * y + mod(x, y) = x`.

*NOTE*: `FloorMod` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `int32`, `int64`, `bfloat16`, `float32`, `float64`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__rmul__`

```
__rmul__(
a,
*args
)
```

Dispatches element-wise (cwise) multiplication for `Dense*Dense` and `Dense*Sparse`.

`__ror__`

```
__ror__(
a,
*args
)
```

Returns the truth value of x OR y element-wise.

*NOTE*: `LogicalOr` supports broadcasting.

Args:
- `x`: A `Tensor` of type `bool`.
- `y`: A `Tensor` of type `bool`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor` of type `bool`.

`__rpow__`

```
__rpow__(
a,
*args
)
```

Computes the power of one value to another.

Given a tensor `x` and a tensor `y`, this operation computes \(x^y\) for corresponding elements in `x` and `y`. For example:

```
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y) # [[256, 65536], [9, 27]]
```

Args:
- `x`: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`, or `complex128`.
- `y`: A `Tensor` of type `float32`, `float64`, `int32`, `int64`, `complex64`, or `complex128`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`.

`__rsub__`

```
__rsub__(
a,
*args
)
```

Returns x - y element-wise.

*NOTE*: `Subtract` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__rtruediv__`

```
__rtruediv__(
a,
*args
)
```

`__rxor__`

```
__rxor__(
a,
*args
)
```

`x ^ y = (x | y) & ~(x & y)`.

`__sub__`

```
__sub__(
a,
*args
)
```

Returns x - y element-wise.

*NOTE*: `Subtract` supports broadcasting.

Args:
- `x`: A `Tensor`. Must be one of the following types: `half`, `bfloat16`, `float32`, `float64`, `uint8`, `int8`, `uint16`, `int16`, `int32`, `int64`, `complex64`, `complex128`.
- `y`: A `Tensor`. Must have the same type as `x`.
- `name`: A name for the operation (optional).

Returns:
A `Tensor`. Has the same type as `x`.

`__truediv__`

```
__truediv__(
a,
*args
)
```

`__xor__`

```
__xor__(
a,
*args
)
```

`x ^ y = (x | y) & ~(x & y)`.

`eval`

```
eval(
session=None,
feed_dict=None
)
```

In a session, computes and returns the value of this random variable.

This is not a graph construction method; it does not add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used.

Args:
- `session`: tf.BaseSession. The `tf.Session` to use to evaluate this random variable. If none, the default session is used.
- `feed_dict`: dict. A dictionary that maps `tf.Tensor` objects to feed values. See `tf.Session.run()` for a description of the valid feed values.

```
x = Normal(0.0, 1.0)
with tf.Session() as sess:
  # Usage passing the session explicitly.
  print(x.eval(sess))
  # Usage with the default session. The 'with' block
  # above makes 'sess' the default session.
  print(x.eval())
```

`get_ancestors`

`get_ancestors(collection=None)`

Get ancestor random variables.

`get_blanket`

`get_blanket(collection=None)`

Get the random variable’s Markov blanket.

`get_children`

`get_children(collection=None)`

Get child random variables.

`get_descendants`

`get_descendants(collection=None)`

Get descendant random variables.

`get_parents`

`get_parents(collection=None)`

Get parent random variables.

`get_shape`

`get_shape()`

Get shape of random variable.

`get_siblings`

`get_siblings(collection=None)`

Get sibling random variables.

`get_variables`

`get_variables(collection=None)`

Get TensorFlow variables that the random variable depends on.
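
As a hedged sketch of these graph queries (assuming the usual Edward semantics, where edges follow how one random variable parameterizes another):

```
from edward.models import Normal

mu = Normal(0.0, 1.0)
x = Normal(mu, 1.0)

parents = x.get_parents()      # expected to contain mu
children = mu.get_children()   # expected to contain x
blanket = mu.get_blanket()     # Markov blanket of mu in this model
```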

`value`

`value()`

Get tensor that the random variable corresponds to.

`__array_priority__`