Copyright 2020 Manning Publications
Welcome
Table of contents
Part 1: Basics of deep learning
1 Introduction to probabilistic deep learning
1.1 A first look at probabilistic models
1.2 A first brief look at deep learning
1.2.1 A success story
1.3 Classification
1.3.1 Traditional approach to image classification
1.3.2 Deep learning approach to image classification
1.3.3 Non-probabilistic classification
1.3.4 Probabilistic classification
1.3.5 Bayesian probabilistic classification
1.4 Curve fitting
1.4.1 Non-probabilistic curve fitting
1.4.2 Probabilistic curve fitting
1.4.3 Bayesian probabilistic curve fitting
1.5 When to use and when not to use DL?
1.5.1 When not to use DL
1.5.2 When to use DL
1.5.3 When to use and when not to use probabilistic models?
1.6 What you’ll learn in this book
1.7 Summary
2 Neural network architectures
2.1 Fully connected neural networks
2.1.1 The biology that inspired the design of artificial NNs
2.1.2 Getting started with implementing an NN
2.1.3 Using a fully connected NN to classify images
2.2 Convolutional NNs for image-like data
2.2.1 Main ideas in a CNN architecture
2.2.2 A minimal CNN for edge lovers
2.2.3 Biological inspiration for a CNN architecture
2.2.4 Building and understanding a CNN
2.3 One-dimensional CNNs for ordered data
2.3.1 Format of time-ordered data
2.3.2 What’s special about ordered data?
2.3.3 Architectures for time-ordered data
2.4 Summary
3 Principles of curve fitting
3.1 “Hello world” in curve fitting
3.1.1 Fitting a linear regression model based on a loss function
3.2 Gradient descent method
3.2.1 Loss with one free model parameter
3.2.2 Loss with two free model parameters
3.3 Special DL sauce
3.3.1 Mini-batch gradient descent
3.3.2 Using SGD variants to speed up the learning
3.3.3 Automatic differentiation
3.4 Backpropagation in DL frameworks
3.4.1 Static graph frameworks
3.4.2 Dynamic graph frameworks
3.5 Summary
Part 2: Maximum likelihood approaches for probabilistic DL models
4 Building loss functions with the likelihood approach
4.1 Introduction to the maximum likelihood principle, the mother of all loss functions
4.2 Deriving a loss function for a classification problem
4.2.1 Binary classification problem
4.2.2 Classification problems with more than two classes
4.2.3 Relationship between NLL, cross entropy, and Kullback-Leibler divergence
4.3 Deriving a loss function for regression problems
4.3.1 Using an NN without hidden layers and one output neuron for modeling a linear relationship between input and output
4.3.2 Using an NN with hidden layers to model non-linear relations between input and output
4.3.3 Using an NN with an additional output for regression tasks with non-constant variance
4.4 Summary
5 Probabilistic deep learning models with TensorFlow Probability
5.1 Evaluating and comparing different probabilistic prediction models
5.2 Introduction to TensorFlow Probability (TFP)
5.3 Modeling continuous data with TFP
5.3.1 Fitting and evaluating a linear regression model with constant variance
5.3.2 Fitting and evaluating a linear regression model with a non-constant standard deviation
5.4 Modeling count data with TensorFlow Probability
5.4.1 The Poisson distribution for count data
5.4.2 Extending the Poisson distribution to a zero-inflated Poisson (ZIP) distribution
5.5 Summary
6 Probabilistic deep learning models in the wild
6.1 Flexible probability distributions in state-of-the-art DL models
6.1.1 Multinomial distribution as a flexible distribution
6.1.2 Making sense of discretized logistic mixture
6.2 Case study: Bavarian roadkills
6.3 Go with the flow: Introduction to normalizing flows (NFs)
6.3.1 The principal idea of NFs
6.3.2 The change of variable technique for probabilities
6.3.3 Fitting an NF to data
6.3.4 Going deeper by chaining flows
6.3.5 Transformation between higher-dimensional spaces*
6.3.6 Using networks to control flows
6.3.7 Fun with flows: Sampling faces
6.4 Summary
Part 3: Bayesian approaches for probabilistic DL models
7 Bayesian learning
7.1 What’s wrong with non-Bayesian DL: The elephant in the room
7.2 The first encounter with the Bayesian approach
7.2.1 Bayesian model: The hackers' way
7.2.2 What did we just do?
7.3 The Bayesian approach for probabilistic models
7.3.1 Training and prediction with a Bayesian model
7.3.2 A coin toss as a Hello World example for Bayesian models
7.3.3 Revisiting the Bayesian linear regression model
7.4 Summary
8 Bayesian neural networks
8.1 Bayesian neural networks (BNNs)
8.2 Variational Inference (VI) as an approximate Bayes approach
8.2.1 Looking under the hood of VI*
8.2.2 Applying VI to the toy problem*
8.3 Variational Inference with TensorFlow Probability
8.4 MC dropout as an approximate Bayes approach
8.4.1 Classical dropout used during training
8.4.2 MC dropout used during training and test time
8.5 Case studies
8.5.1 Regression case study on extrapolation
8.5.2 Classification case study with novel classes
8.6 Summary
A Glossary of terms and abbreviations