Edward provides a testbed for rapid experimentation and research with probabilistic models. The tutorials below show how to apply it to a range of learning tasks.

Bayesian linear regression

A fundamental model for supervised learning.
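
As a quick preview, here is a minimal sketch of this model in Edward; the toy data, shapes, and hyperparameters are illustrative assumptions, not the tutorial's exact code.

  import edward as ed
  import numpy as np
  import tensorflow as tf
  from edward.models import Normal

  # Toy data, assumed only for illustration: N points with D features.
  N, D = 40, 5
  X_train = np.random.randn(N, D).astype(np.float32)
  y_train = (X_train.dot(np.ones(D)) + 0.1 * np.random.randn(N)).astype(np.float32)

  # Model: y = Xw + b + Gaussian noise, with standard Normal priors on w and b.
  X = tf.placeholder(tf.float32, [N, D])
  w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
  b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
  y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(N))

  # Fully factorized Normal approximation to the posterior over (w, b).
  qw = Normal(loc=tf.get_variable("qw/loc", [D]),
              scale=tf.nn.softplus(tf.get_variable("qw/scale", [D])))
  qb = Normal(loc=tf.get_variable("qb/loc", [1]),
              scale=tf.nn.softplus(tf.get_variable("qb/scale", [1])))

  # Variational inference by minimizing KL(q || p).
  inference = ed.KLqp({w: qw, b: qb}, data={X: X_train, y: y_train})
  inference.run(n_samples=5, n_iter=250)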

Batch training

How to train a model using only minibatches of data at a time.
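
A hedged sketch of the pattern, again on assumed toy data: placeholders hold one minibatch at a time, a hypothetical next_batch helper stands in for real data loading, and the scale argument re-weights each minibatch's log-likelihood to the full data size, so the objective still targets the whole data set.

  import edward as ed
  import numpy as np
  import tensorflow as tf
  from edward.models import Normal

  N, M, D = 10000, 128, 5  # full data size, minibatch size, number of features

  def next_batch(size):
    # Hypothetical helper; in practice this would read the next slice of real data.
    X = np.random.randn(size, D).astype(np.float32)
    return X, X.dot(np.ones(D)).astype(np.float32)

  # Same regression model as above, but the placeholders hold a single minibatch.
  X = tf.placeholder(tf.float32, [M, D])
  y_ph = tf.placeholder(tf.float32, [M])
  w = Normal(loc=tf.zeros(D), scale=tf.ones(D))
  b = Normal(loc=tf.zeros(1), scale=tf.ones(1))
  y = Normal(loc=ed.dot(X, w) + b, scale=tf.ones(M))

  qw = Normal(loc=tf.get_variable("qw/loc", [D]),
              scale=tf.nn.softplus(tf.get_variable("qw/scale", [D])))
  qb = Normal(loc=tf.get_variable("qb/loc", [1]),
              scale=tf.nn.softplus(tf.get_variable("qb/scale", [1])))

  inference = ed.KLqp({w: qw, b: qb}, data={y: y_ph})
  # scale re-weights the minibatch likelihood by N / M; priors and the
  # variational family are unchanged.
  inference.initialize(n_iter=500, scale={y: float(N) / M})

  sess = ed.get_session()
  tf.global_variables_initializer().run()
  for _ in range(inference.n_iter):
    X_batch, y_batch = next_batch(M)
    info_dict = inference.update(feed_dict={X: X_batch, y_ph: y_batch})
    inference.print_progress(info_dict)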

TensorBoard

Visualize learning, explore the computational graph, and diagnose training problems.

Linear mixed effects models

Linear modeling of fixed and random effects.

Gaussian process classification

Learning a distribution over functions for supervised classification.

Mixture models

Unsupervised learning by clustering data points.

Latent space models

Analyzing connectivity patterns in neural data.

Mixture density networks

A neural density estimator for solving inverse problems.

Generative adversarial networks

Building a deep generative model of MNIST digits.

Probabilistic decoder

A model of latent codes in information theory.

Inference networks

How to amortize computation for training and testing models.
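
One common instance is a variational autoencoder-style setup, sketched roughly below with assumed layer sizes and random stand-in data: a single encoder network outputs the parameters of q(z | x) for every data point, so inference is amortized across the data set instead of fitting per-point variational parameters.

  import edward as ed
  import numpy as np
  import tensorflow as tf
  from edward.models import Bernoulli, Normal

  M, d, D = 128, 10, 28 * 28  # minibatch size, latent dimension, pixels per image

  x_ph = tf.placeholder(tf.int32, [M, D])

  # Generative model (decoder): z ~ N(0, I); x | z ~ Bernoulli(decoder(z)).
  z = Normal(loc=tf.zeros([M, d]), scale=tf.ones([M, d]))
  h_dec = tf.layers.dense(z, 256, activation=tf.nn.relu)
  x = Bernoulli(logits=tf.layers.dense(h_dec, D))

  # Inference network (encoder): one neural net maps each observed x to its q(z | x).
  h_enc = tf.layers.dense(tf.cast(x_ph, tf.float32), 256, activation=tf.nn.relu)
  qz = Normal(loc=tf.layers.dense(h_enc, d),
              scale=tf.nn.softplus(tf.layers.dense(h_enc, d)))

  inference = ed.KLqp({z: qz}, data={x: x_ph})
  inference.initialize(optimizer=tf.train.AdamOptimizer(1e-3))

  sess = ed.get_session()
  tf.global_variables_initializer().run()
  for _ in range(1000):
    # Random binary batch as a stand-in for, e.g., binarized MNIST images.
    x_batch = np.random.binomial(1, 0.5, size=(M, D)).astype(np.int32)
    inference.update(feed_dict={x_ph: x_batch})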

Bayesian neural network

Bayesian analysis with neural networks.
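
A compact sketch, assuming a single tanh hidden layer and toy one-dimensional data: Normal priors are placed over all weights and biases, and variational inference approximates their posterior.

  import edward as ed
  import numpy as np
  import tensorflow as tf
  from edward.models import Normal

  def neural_network(x, W_0, b_0, W_1, b_1):
    # One tanh hidden layer; returns a vector of N predicted means.
    h = tf.tanh(tf.matmul(x, W_0) + b_0)
    return tf.reshape(tf.matmul(h, W_1) + b_1, [-1])

  # Toy 1-D regression data, assumed for illustration.
  N, D, H = 50, 1, 10
  x_train = np.linspace(-3.0, 3.0, N).reshape(N, D).astype(np.float32)
  y_train = (np.cos(x_train[:, 0]) + np.random.normal(0.0, 0.1, N)).astype(np.float32)

  # Priors over all weights and biases.
  W_0 = Normal(loc=tf.zeros([D, H]), scale=tf.ones([D, H]))
  b_0 = Normal(loc=tf.zeros(H), scale=tf.ones(H))
  W_1 = Normal(loc=tf.zeros([H, 1]), scale=tf.ones([H, 1]))
  b_1 = Normal(loc=tf.zeros(1), scale=tf.ones(1))
  y = Normal(loc=neural_network(x_train, W_0, b_0, W_1, b_1), scale=0.1 * tf.ones(N))

  # Normal variational approximation for every weight and bias.
  qW_0 = Normal(loc=tf.get_variable("qW_0/loc", [D, H]),
                scale=tf.nn.softplus(tf.get_variable("qW_0/scale", [D, H])))
  qb_0 = Normal(loc=tf.get_variable("qb_0/loc", [H]),
                scale=tf.nn.softplus(tf.get_variable("qb_0/scale", [H])))
  qW_1 = Normal(loc=tf.get_variable("qW_1/loc", [H, 1]),
                scale=tf.nn.softplus(tf.get_variable("qW_1/scale", [H, 1])))
  qb_1 = Normal(loc=tf.get_variable("qb_1/loc", [1]),
                scale=tf.nn.softplus(tf.get_variable("qb_1/scale", [1])))

  inference = ed.KLqp({W_0: qW_0, b_0: qb_0, W_1: qW_1, b_1: qb_1},
                      data={y: y_train})
  inference.run(n_iter=1000)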

Probabilistic PCA

Dimensionality reduction with latent variables.
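
A minimal sketch of the model, using data simulated from the model itself purely for illustration: the latent variables z and principal axes w both get Normal priors, and observations are a noisy linear map of z.

  import edward as ed
  import numpy as np
  import tensorflow as tf
  from edward.models import Normal

  N, D, K = 5000, 2, 1  # number of data points, data dimension, latent dimension

  # Simulated data, assumed for illustration: drawn from the model itself.
  w_true = np.random.randn(D, K)
  z_true = np.random.randn(K, N)
  x_train = (w_true.dot(z_true) + 0.1 * np.random.randn(D, N)).astype(np.float32)

  # Model: x = Wz + noise, with Normal priors on principal axes W and latents z.
  w = Normal(loc=tf.zeros([D, K]), scale=2.0 * tf.ones([D, K]))
  z = Normal(loc=tf.zeros([K, N]), scale=tf.ones([K, N]))
  x = Normal(loc=tf.matmul(w, z), scale=0.1 * tf.ones([D, N]))

  # Fully factorized Normal approximation over W and z.
  qw = Normal(loc=tf.get_variable("qw/loc", [D, K]),
              scale=tf.nn.softplus(tf.get_variable("qw/scale", [D, K])))
  qz = Normal(loc=tf.get_variable("qz/loc", [K, N]),
              scale=tf.nn.softplus(tf.get_variable("qz/scale", [K, N])))

  inference = ed.KLqp({w: qw, z: qz}, data={x: x_train})
  inference.run(n_iter=500)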

If you’re interested in contributing a tutorial, check out the contributing page. For more background and notation, see the pages below.

There are also companion webpages for several papers about Edward.