Probabilistic Deep Learning with Probabilistic Neural Networks and Deep Probabilistic Models

May 31, 2021

Daniel T. Chang

Introduction

Standard deep learning is deterministic and fails to account for uncertainty, whether in the model or in the data, which severely limits its usefulness in mission-critical and real-world settings such as biomedical applications. Probabilistic deep learning addresses this limitation by modeling uncertainty explicitly. The approach is built on combining probabilistic models with deep neural networks.

In this blog, we will look at two approaches to probabilistic deep learning: probabilistic neural networks and deep probabilistic models. The former are deep neural networks that employ probabilistic layers, which can represent and process uncertainty. The latter are probabilistic models that incorporate deep neural network components, which capture complex non-linear stochastic relationships between random variables.

Probabilistic Neural Networks

Our discussion of probabilistic neural networks will focus on two examples: Bayesian neural networks and mixture density networks. Bayesian neural networks use probabilistic layers that capture uncertainty over weights and activations, and they are trained using Bayesian inference. Mixture density networks, on the other hand, are designed to quantify the uncertainty of neural network predictions. They consist of two components: a mixture model and a deep neural network, with the network outputting the parameters of the mixture distribution.
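As a minimal sketch of both ideas in TFP (the layer widths, input dimension, and number of mixture components below are illustrative choices, not prescribed values):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfpl = tfp.layers

# Bayesian neural network: DenseFlipout layers maintain distributions
# over their weights and are trained with variational inference.
bnn = tf.keras.Sequential([
    tfpl.DenseFlipout(32, activation="relu"),
    tfpl.DenseFlipout(1),
])
bnn.compile(optimizer="adam", loss="mse")  # KL terms from the layers are added automatically

# Mixture density network: a deterministic network that outputs the
# parameters of a Gaussian mixture over the prediction target.
num_components = 3  # illustrative
mdn = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(8,)),  # 8 input features, illustrative
    tf.keras.layers.Dense(tfpl.MixtureNormal.params_size(num_components, event_shape=[1])),
    tfpl.MixtureNormal(num_components, event_shape=[1]),
])

# The MDN's output is a distribution, so train by minimizing the
# negative log-likelihood of the observed targets.
nll = lambda y, dist: -dist.log_prob(y)
mdn.compile(optimizer="adam", loss=nll)
```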

Deep Probabilistic Models

Next, we will delve into deep probabilistic models, focusing on variational autoencoders, deep Gaussian processes, and deep mixed effects models. Variational autoencoders are prescribed deep generative models (i.e., models with an explicit likelihood), realized with an autoencoder neural network consisting of an encoder network and a decoder network. The encoder maps a data sample to a latent representation; the decoder generates data samples from the latent space.
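A minimal TFP sketch of this architecture, assuming real-valued data (the data and latent dimensions and layer widths below are illustrative):

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfpl = tfp.layers

data_dim, latent_dim = 784, 2  # illustrative sizes (e.g. flattened 28x28 images)

# Standard-normal prior over the latent space.
prior = tfd.Independent(tfd.Normal(loc=tf.zeros(latent_dim), scale=1.0),
                        reinterpreted_batch_ndims=1)

# Encoder: maps a data sample to a distribution over latent codes;
# KLDivergenceAddLoss adds the KL(q(z|x) || p(z)) term to the loss.
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(data_dim,)),
    tf.keras.layers.Dense(tfpl.IndependentNormal.params_size(latent_dim)),
    tfpl.IndependentNormal(latent_dim),
    tfpl.KLDivergenceAddLoss(prior),
])

# Decoder: maps a latent code to a distribution over data space.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(tfpl.IndependentNormal.params_size(data_dim)),
    tfpl.IndependentNormal(data_dim),
])

# The VAE reconstructs its input; the reconstruction term of the ELBO
# is the negative log-likelihood of the data under the decoder.
vae = tf.keras.Model(inputs=encoder.inputs,
                     outputs=decoder(encoder.outputs[0]))
vae.compile(optimizer="adam", loss=lambda x, dist: -dist.log_prob(x))
```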

Deep Gaussian processes are multi-layer generalizations of Gaussian processes: they retain the compositional nature of deep learning while mitigating some of its disadvantages through a Bayesian treatment. Deep mixed effects models are extensions of linear mixed effects models in which the non-linear mappings between the response variable and the predictor variables are modeled with deep neural networks.
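TFP does not provide a turnkey deep Gaussian process, but its GaussianProcess distribution gives the single-layer building block; the sketch below (grid, kernel, and noise parameters are all illustrative) draws sample functions from a GP prior. A deep GP stacks such layers, feeding the outputs of one layer in as the inputs of the next.

```python
import numpy as np
import tensorflow_probability as tfp

tfd = tfp.distributions
psd_kernels = tfp.math.psd_kernels

# A single-layer Gaussian process prior over function values
# evaluated at a grid of index points.
index_points = np.linspace(-1., 1., 50, dtype=np.float32)[:, np.newaxis]
kernel = psd_kernels.ExponentiatedQuadratic(amplitude=1., length_scale=0.5)
gp = tfd.GaussianProcess(kernel=kernel,
                         index_points=index_points,
                         observation_noise_variance=0.1)

samples = gp.sample(5)  # five function draws from the GP prior
```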

TensorFlow Probability Library

TensorFlow Probability (TFP) is a comprehensive library for probabilistic modeling and inference. It supports building both probabilistic neural networks and deep probabilistic models, and illustrative code examples using it are included throughout this blog.

Background Information

Uncertainty arises from several sources: our limited ability to observe the world, our limited ability to model it, and possibly inherent non-determinism, as in quantum phenomena. From a deep learning perspective, there are two types of uncertainty: model (epistemic) uncertainty and data (aleatoric) uncertainty.

Probabilistic models use probability theory to represent all forms of uncertainty and to make inferences. Probability distributions represent the uncertain elements in a model and how they relate to the data. The process of drawing inferences from data using probability theory is referred to as Bayesian learning or Bayesian inference.
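For instance, a toy two-variable model, with a Normal prior over an unknown mean and an observation drawn around that mean, can be written directly as a joint distribution in TFP:

```python
import tensorflow_probability as tfp

tfd = tfp.distributions

# A toy probabilistic model: a Normal prior over an unknown mean mu,
# and an observation x assumed Normal around that mean.
model = tfd.JointDistributionNamed(dict(
    mu=tfd.Normal(loc=0., scale=1.),             # prior over the mean
    x=lambda mu: tfd.Normal(loc=mu, scale=0.5),  # likelihood of one observation
))

sample = model.sample()         # draws {'mu': ..., 'x': ...} from the joint
log_p = model.log_prob(sample)  # joint log-density of the draw
```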

Bayesian Inference

The main idea of Bayesian inference is to use Bayes' theorem to infer a posterior distribution over the parameters of a probabilistic model given observed data. Computing the posterior, and even more so sampling from it, is usually intractable, chiefly because computing the evidence requires evaluating a hard integral. Approximate inference methods are therefore used; they include Markov chain Monte Carlo (MCMC), which approximates the integrals via sampling, and variational inference (VI), which approximates them via optimization.
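Concretely, Bayes' theorem expresses the posterior in terms of the likelihood, the prior, and the evidence, and it is the evidence integral that makes exact inference hard:

```latex
p(\theta \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid \theta)\, p(\theta)}{p(\mathcal{D})},
\qquad
p(\mathcal{D}) = \int p(\mathcal{D} \mid \theta)\, p(\theta)\, d\theta
```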

TensorFlow Probability (TFP)

TensorFlow Probability (TFP) is a library for probabilistic modeling and inference in TensorFlow. It integrates probabilistic models with deep neural networks, supports gradient-based inference via automatic differentiation, and scales to large datasets and models via hardware acceleration (e.g., GPUs) and distributed computation. It is organized in layers: TensorFlow numerical operations at the base, probabilistic building blocks such as distributions and bijectors above that, and higher-level tools for probabilistic model building and inference on top.
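A quick taste of the building-block layer, using distributions and bijectors:

```python
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

# Distributions: sample and evaluate log-densities.
normal = tfd.Normal(loc=0., scale=1.)
print(normal.sample(3))     # three draws from N(0, 1)
print(normal.log_prob(0.))  # log-density at 0

# Bijectors transform a base distribution into a new one,
# here a log-normal obtained by exponentiating a normal.
log_normal = tfd.TransformedDistribution(distribution=normal,
                                         bijector=tfb.Exp())
print(log_normal.sample(3))
```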

Conclusion

Probabilistic neural networks and deep probabilistic models represent a paradigm shift in machine learning, allowing us to model and quantify uncertainty. This is of vital importance in a world full of complexities and risks that are difficult to pin down. Coupled with other cutting-edge technology, they can harness the power of AI to build robust solutions that tackle real-world problems.
