Machine Learning with TensorFlow, 2e
Copyright
dedication
Praise for the First Edition
front matter
foreword
preface
acknowledgments
about this book
How this book is organized: A roadmap
About the code
liveBook discussion forum
about the author
about the cover illustration
contents
Part 1 Your machine-learning rig
1 A machine-learning odyssey
1.1 Machine-learning fundamentals
1.1.1 Parameters
1.1.2 Learning and inference
1.2 Data representation and features
1.3 Distance metrics
1.4 Types of learning
1.4.1 Supervised learning
1.4.2 Unsupervised learning
1.4.3 Reinforcement learning
1.4.4 Meta-learning
1.5 TensorFlow
1.6 Overview of future chapters
Summary
2 TensorFlow essentials
2.1 Ensuring that TensorFlow works
2.2 Representing tensors
2.3 Creating operators
2.4 Executing operators within sessions
2.5 Understanding code as a graph
2.5.1 Setting session configurations
2.6 Writing code in Jupyter
2.7 Using variables
2.8 Saving and loading variables
2.9 Visualizing data using TensorBoard
2.9.1 Implementing a moving average
2.9.2 Visualizing the moving average
2.10 Putting it all together: The TensorFlow system architecture and API
Summary
Part 2 Core learning algorithms
3 Linear regression and beyond
3.1 Formal notation
3.1.1 How do you know the regression algorithm is working?
3.2 Linear regression
3.3 Polynomial model
3.4 Regularization
3.5 Application of linear regression
Summary
4 Using regression for call-center volume prediction
4.1 What is 311?
4.2 Cleaning the data for regression
4.3 What’s in a bell curve? Predicting Gaussian distributions
4.4 Training your call prediction regressor
4.5 Visualizing the results and plotting the error
4.6 Regularization and training test splits
Summary
5 A gentle introduction to classification
5.1 Formal notation
5.2 Measuring performance
5.2.1 Accuracy
5.2.2 Precision and recall
5.2.3 Receiver operating characteristic curve
5.3 Using linear regression for classification
5.4 Using logistic regression
5.4.1 Solving 1D logistic regression
5.4.2 Solving 2D logistic regression
5.5 Multiclass classifier
5.5.1 One-versus-all
5.5.2 One-versus-one
5.5.3 Softmax regression
5.6 Application of classification
Summary
6 Sentiment classification: Large movie-review dataset
6.1 Using the Bag of Words model
6.1.1 Applying the Bag of Words model to movie reviews
6.1.2 Cleaning all the movie reviews
6.1.3 Exploratory data analysis on your Bag of Words
6.2 Building a sentiment classifier using logistic regression
6.2.1 Setting up the training for your model
6.2.2 Performing the training for your model
6.3 Making predictions using your sentiment classifier
6.4 Measuring the effectiveness of your classifier
6.5 Creating the softmax-regression sentiment classifier
6.6 Submitting your results to Kaggle
Summary
7 Automatically clustering data
7.1 Traversing files in TensorFlow
7.2 Extracting features from audio
7.3 Using k-means clustering
7.4 Segmenting audio
7.5 Clustering with a self-organizing map
7.6 Applying clustering
Summary
8 Inferring user activity from Android accelerometer data
8.1 The User Activity from Walking dataset
8.1.1 Creating the dataset
8.1.2 Computing jerk and extracting the feature vector
8.2 Clustering similar participants based on jerk magnitudes
8.3 Different classes of user activity for a single participant
Summary
9 Hidden Markov models
9.1 Example of a not-so-interpretable model
9.2 Markov model
9.3 Hidden Markov model
9.4 Forward algorithm
9.5 Viterbi decoding
9.6 Uses of HMMs
9.6.1 Modeling a video
9.6.2 Modeling DNA
9.6.3 Modeling an image
9.7 Application of HMMs
Summary
10 Part-of-speech tagging and word-sense disambiguation
10.1 Review of HMM example: Rainy or Sunny
10.2 PoS tagging
10.2.1 The big picture: Training and predicting PoS with HMMs
10.2.2 Generating the ambiguity PoS tagged dataset
10.3 Algorithms for building the HMM for PoS disambiguation
10.3.1 Generating the emission probabilities
10.4 Running the HMM and evaluating its output
10.5 Getting more training data from the Brown Corpus
10.6 Defining error bars and metrics for PoS tagging
Summary
Part 3 The neural network paradigm
11 A peek into autoencoders
11.1 Neural networks
11.2 Autoencoders
11.3 Batch training
11.4 Working with images
11.5 Application of autoencoders
Summary
12 Applying autoencoders: The CIFAR-10 image dataset
12.1 What is CIFAR-10?
12.1.1 Evaluating your CIFAR-10 autoencoder
12.2 Autoencoders as classifiers
12.2.1 Using the autoencoder as a classifier via loss
12.3 Denoising autoencoders
12.4 Stacked deep autoencoders
Summary
13 Reinforcement learning
13.1 Formal notions
13.1.1 Policy
13.1.2 Utility
13.2 Applying reinforcement learning
13.3 Implementing reinforcement learning
13.4 Exploring other applications of reinforcement learning
Summary
14 Convolutional neural networks
14.1 Drawback of neural networks
14.2 Convolutional neural networks
14.3 Preparing the image
14.3.1 Generating filters
14.3.2 Convolving using filters
14.3.3 Max pooling
14.4 Implementing a CNN in TensorFlow
14.4.1 Measuring performance
14.4.2 Training the classifier
14.5 Tips and tricks to improve performance
14.6 Application of CNNs
Summary
15 Building a real-world CNN: VGG-Face and VGG-Face Lite
15.1 Making a real-world CNN architecture for CIFAR-10
15.1.1 Loading and preparing the CIFAR-10 image data
15.1.2 Performing data augmentation
15.2 Building a deeper CNN architecture for CIFAR-10
15.2.1 CNN optimizations for increasing learned parameter resilience
15.3 Training and applying a better CIFAR-10 CNN
15.4 Testing and evaluating your CNN for CIFAR-10
15.4.1 CIFAR-10 accuracy results and ROC curves
15.4.2 Evaluating the softmax predictions per class
15.5 Building VGG-Face for facial recognition
15.5.1 Picking a subset of VGG-Face for training VGG-Face Lite
15.5.2 TensorFlow’s Dataset API and data augmentation
15.5.3 Creating a TensorFlow dataset
15.5.4 Training using TensorFlow datasets
15.5.5 VGG-Face Lite model and training
15.5.6 Training and evaluating VGG-Face Lite
15.5.7 Evaluating and predicting with VGG-Face Lite
Summary
16 Recurrent neural networks
16.1 Introduction to RNNs
16.2 Implementing a recurrent neural network
16.3 Using a predictive model for time-series data
16.4 Applying RNNs
Summary
17 LSTMs and automatic speech recognition
17.1 Preparing the LibriSpeech corpus
17.1.1 Downloading, cleaning, and preparing LibriSpeech OpenSLR data
17.1.2 Converting the audio
17.1.3 Generating per-audio transcripts
17.1.4 Aggregating audio and transcripts
17.2 Using the deep-speech model
17.2.1 Preparing the input audio data for deep speech
17.2.2 Preparing the text transcripts as character-level numerical data
17.2.3 The deep-speech model in TensorFlow
17.2.4 Connectionist temporal classification in TensorFlow
17.3 Training and evaluating deep speech
Summary
18 Sequence-to-sequence models for chatbots
18.1 Building on classification and RNNs
18.2 Understanding seq2seq architecture
18.3 Vector representation of symbols
18.4 Putting it all together
18.5 Gathering dialogue data
Summary
19 Utility landscape
19.1 Preference model
19.2 Image embedding
19.3 Ranking images
Summary
What’s next
appendix Installation instructions
A.1 Installing the book’s code with Docker
A.1.1 Installing Docker in Windows
A.1.2 Installing Docker in Linux
A.1.3 Installing Docker in macOS
A.1.4 Using Docker
A.2 Getting the data and storing models
A.3 Necessary libraries
A.4 Converting the call-center example to TensorFlow 2
A.4.1 The call-center example with TF2
index