Index
Title Page
Copyright and Credits
Hands-On Convolutional Neural Networks with TensorFlow
Packt Upsell
Why subscribe?
PacktPub.com
Contributors
About the authors
Packt is searching for authors like you
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Conventions used
Get in touch
Reviews
Setup and Introduction to TensorFlow
The TensorFlow way of thinking
Setting up and installing TensorFlow
Conda environments
Checking whether your installation works
TensorFlow API levels
Eager execution
Building your first TensorFlow model
One-hot vectors
Splitting into training and test sets
Creating TensorFlow graphs
Variables
Operations
Feeding data with placeholders
Initializing variables
Training our model
Loss functions
Optimization
Evaluating a trained model
The session
Summary
Deep Learning and Convolutional Neural Networks
AI and ML
Types of ML
Old versus new ML
Artificial neural networks
Activation functions
The XOR problem
Training neural networks
Backpropagation and the chain rule
Batches
Loss functions
The optimizer and its hyperparameters
Underfitting versus overfitting
Feature scaling
Fully connected layers
A TensorFlow example for the XOR problem
Convolutional neural networks
Convolution
Input padding
Calculating the number of parameters (weights)
Calculating the number of operations
Converting convolution layers into fully connected layers
The pooling layer
1x1 Convolution
Calculating the receptive field
Building a CNN model in TensorFlow
TensorBoard
Other types of convolutions
Summary
Image Classification in TensorFlow
CNN model architecture
Cross-entropy loss (log loss)
Multi-class cross entropy loss
The train/test dataset split
Datasets
ImageNet
CIFAR
Loading CIFAR
Image classification with TensorFlow
Building the CNN graph
Learning rate scheduling
Introduction to the tf.data API
The main training loop
Model Initialization
Do not initialize all weights with zeros
Initializing with a mean zero distribution
Xavier-Bengio and the Initializer
Improving generalization by regularizing
L2 and L1 regularization
Dropout
The batch norm layer
Summary
Object Detection and Segmentation
Image classification with localization
Localization as regression
TensorFlow implementation
Other applications of localization
Object detection as classification – Sliding window
Using heuristics to guide us (R-CNN)
Problems
Fast R-CNN
Faster R-CNN
Region Proposal Network
RoI Pooling layer
Conversion from traditional CNN to Fully Convnets
Single Shot Detectors – You Only Look Once
Creating training set for Yolo object detection
Evaluating detection (Intersection Over Union)
Filtering output
Anchor Box
Testing/Predicting in Yolo
Detector Loss function (YOLO loss)
Loss Part 1
Loss Part 2
Loss Part 3
Semantic segmentation
Max Unpooling
Deconvolution layer (Transposed convolution)
The loss function
Labels
Improving results
Instance segmentation
Mask R-CNN
Summary
VGG, Inception Modules, Residuals, and MobileNets
Substituting big convolutions
Substituting the 3x3 convolution
VGGNet
Architecture
Parameters and memory calculation
Code
More about VGG
GoogLeNet
Inception module
More about GoogLeNet
Residual Networks
MobileNets
Depthwise separable convolution
Control parameters
More about MobileNets
Summary
Autoencoders, Variational Autoencoders, and Generative Adversarial Networks
Why generative models
Autoencoders
Convolutional autoencoder example
Uses and limitations of autoencoders
Variational autoencoders
Parameters to define a normal distribution
VAE loss function
Kullback-Leibler divergence
Training the VAE
The reparameterization trick
Convolutional Variational Autoencoder code
Generating new data
Generative adversarial networks
The discriminator
The generator
GAN loss function
Generator loss
Discriminator loss
Putting the losses together
Training the GAN
Deep convolutional GAN
WGAN
BEGAN
Conditional GANs
Problems with GANs
Loss interpretability
Mode collapse
Techniques to improve GANs' trainability
Minibatch discriminator
Summary
Transfer Learning
When?
How? An overview
How? Code example
TensorFlow useful elements
An autoencoder without the decoder
Selecting layers
Training only some layers
Complete source
Summary
Machine Learning Best Practices and Troubleshooting
Building Machine Learning Systems
Data Preparation
Split of Train/Development/Test set
Mismatch of the Dev and Test set
When to Change Dev/Test Set
Bias and Variance
Data Imbalance
Collecting more data
Look at your performance metric
Data synthesis/Augmentation
Resample Data
Loss function Weighting
Evaluation Metrics
Code Structure best Practice
Singleton Pattern
Recipe for CNN creation
Summary
Training at Scale
Storing data in TFRecords
Making a TFRecord
Storing encoded images
Sharding
Making efficient pipelines
Parallel calls for map transformations
Getting a batch
Prefetching
Tracing your graph
Distributed computing in TensorFlow
Model/data parallelism
Synchronous/asynchronous SGD
When data does not fit on one computer
The advantages of NoSQL systems
Installing Cassandra (Ubuntu 16.04)
The CQLSH tool
Creating databases, tables, and indexes
Doing queries in Python
Populating tables in Python
Doing backups
Scaling computation in the cloud
EC2
AMI
Storage (S3)
SageMaker
Summary
References
Chapter 1
Chapter 2
Chapter 3
Chapter 4
Chapter 5
Chapter 7
Chapter 9
Other Books You May Enjoy
Leave a review - let other readers know what you think