Index
Title Page
Copyright
Neural Network Programming with TensorFlow
Credits
About the Authors
About the Reviewer
www.PacktPub.com
Why subscribe?
Customer Feedback
Preface
What this book covers
What you need for this book
Who this book is for
Conventions
Reader feedback
Customer support
Downloading the example code
Downloading the color images of this book
Errata
Piracy
Questions
Maths for Neural Networks
Understanding linear algebra
Environment setup
Setting up the Python environment in PyCharm
Linear algebra structures
Scalars, vectors, and matrices
Tensors
Operations
Vectors
Matrices
Matrix multiplication
Trace operator
Matrix transpose
Matrix diagonals
Identity matrix
Inverse matrix
Solving linear equations
Singular value decomposition
Eigenvalue decomposition
Principal Component Analysis
Calculus
Gradient
Hessian
Determinant
Optimization
Optimizers
Summary
Deep Feedforward Networks
Defining feedforward networks
Understanding backpropagation
Implementing feedforward networks with TensorFlow
Analyzing the Iris dataset
Code execution
Implementing feedforward networks with images
Analyzing the effect of activation functions on feedforward network accuracy
Summary
Optimization for Neural Networks
What is optimization?
Types of optimizers
Gradient descent
Different variants of gradient descent
Algorithms to optimize gradient descent
Which optimizer to choose
Optimization with an example
Summary
Convolutional Neural Networks
An overview and the intuition of CNN
Single Conv Layer Computation
CNN in TensorFlow
Image loading in TensorFlow
Convolution operations
Convolution on an image
Strides
Pooling
Max pool
Example code
Average pool
Image classification with convolutional networks
Defining a tensor for input images and the first convolution layer
Input tensor
First convolution layer
Second convolution layer
Third convolution layer
Flatten the layer
Fully connected layers
Defining cost and optimizer
Optimizer
First epoch
Plotting filters and their effects on an image
Summary
Recurrent Neural Networks
Introduction to RNNs
RNN implementation
Computational graph
RNN implementation with TensorFlow
Computational graph
Introduction to long short-term memory networks
Life cycle of LSTM
LSTM implementation
Computational graph
Sentiment analysis
Word embeddings
Sentiment analysis with an RNN
Computational graph
Summary
Generative Models
Generative models
Discriminative versus generative models
Types of generative models
Autoencoders
GAN
Sequence models
GANs
GAN with an example
Types of GANs
Vanilla GAN
Conditional GAN
Info GAN
Wasserstein GAN
Coupled GAN
Summary
Deep Belief Networks
Understanding deep belief networks
DBN implementation
Class initialization
RBM class
Pretraining the DBN
Model training
Predicting the label
Finding the accuracy of the model
DBN implementation for the MNIST dataset
Loading the dataset
Input parameters for a DBN with 256-neuron RBM layers
Output for a DBN with 256-neuron RBM layers
Effect of the number of neurons in an RBM layer in a DBN
An RBM layer with 512 neurons
An RBM layer with 128 neurons
Comparing the accuracy metrics
DBNs with two RBM layers
Classifying the NotMNIST dataset with a DBN
Summary
Autoencoders
Autoencoder algorithms
Under-complete autoencoders
Dataset
Basic autoencoders
Autoencoder initialization
AutoEncoder class
Basic autoencoders with MNIST data
Basic autoencoder plot of weights
Basic autoencoder recreated images plot
Basic autoencoder full code listing
Basic autoencoder summary
Additive Gaussian Noise autoencoder
Autoencoder class
Additive Gaussian Autoencoder with the MNIST dataset
Training the model
Plotting the weights
Plotting the reconstructed images
Additive Gaussian autoencoder full code listing
Comparing basic encoder costs with the Additive Gaussian Noise autoencoder
Additive Gaussian Noise autoencoder summary
Sparse autoencoder
KL divergence
KL divergence in TensorFlow
Cost of a sparse autoencoder based on KL Divergence
Complete code listing of the sparse autoencoder
Sparse autoencoder on MNIST data
Comparing the Sparse encoder with the Additive Gaussian Noise encoder
Summary
Research in Neural Networks
Avoiding overfitting in neural networks
Problem statement
Solution
Results
Large-scale video processing with neural networks
Resolution improvements
Feature histogram baselines
Quantitative results
Named entity recognition using a twisted neural network
Example of named entity recognition
Defining Twinet
Results
Bidirectional RNNs
BRNN on TIMIT dataset
Summary
Getting started with TensorFlow
Environment setup
TensorFlow comparison with NumPy
Computational graph
Graph
Session objects
Variables
Scope
Data input
Placeholders and feed dictionaries
Auto differentiation
TensorBoard