
Index
Title Page
Copyright and Credits
What's New in TensorFlow 2.0
Contributors
About the authors
About the reviewers
Packt is searching for authors like you
About Packt
Why subscribe?
Preface
Who this book is for
What this book covers
To get the most out of this book
Download the example code files
Download the color images
Conventions used
Get in touch
Reviews
Section 1: TensorFlow 2.0 - Architecture and API Changes
Getting Started with TensorFlow 2.0
Technical requirements
What's new?
Changes from TF 1.x
TF 2.0 installation and setup
Installing and using pip
Using Docker
GPU installation
Installing using Docker
Installing using pip
Using TF 2.0
Rich extensions
Ragged Tensors
What are Ragged Tensors, really?
Constructing a Ragged Tensor
Basic operations on Ragged Tensors
New and important packages
Summary
Keras Default Integration and Eager Execution
Technical requirements
New abstractions in TF 2.0
Diving deep into the Keras API
What is Keras?
Building models
The Keras layers API
Simple model building using the Sequential API
Advanced model building using the functional API
Training models
Saving and loading models
Loading and saving architecture and weights separately
Loading and saving architectures
Loading and saving weights
Saving and loading entire models
Using Keras
Using the SavedModel API
Other features
The keras.applications module
The keras.datasets module
An end-to-end Sequential example
Estimators
Evaluating TensorFlow graphs
Lazy loading versus eager execution
Summary
Section 2: TensorFlow 2.0 - Data and Model Training Pipelines
Designing and Constructing Input Data Pipelines
Technical requirements
Designing and constructing the data pipeline
Raw data
Splitting data into train, validation, and test data
Creating TFRecords
TensorFlow protocol messages – tf.Example
tf.data dataset object creation
Creating dataset objects
Creating datasets using TFRecords
Creating datasets using in-memory objects and tensors
Creating datasets using other formats directly without using TFRecords
Transforming datasets
The map function
The flat_map function
The zip function
The concatenate function
The interleave function
The take(count) function
The filter(predicate) function
Shuffling and repeating the use of tf.data.Dataset
Batching
Prefetching
Validating your data pipeline output before feeding it to the model
Feeding the created dataset to the model
Examples of complete end-to-end data pipelines
Creating TFRecords using pickle files
Best practices and the performance optimization of a data pipeline in TF 2.0
Built-in datasets in TF 2.0
Summary
Further reading
Model Training and Use of TensorBoard
Technical requirements
Comparing Keras and tf.keras
Comparing estimator and tf.keras
A quick review of machine learning taxonomy and TF support
Creating models using tf.keras 2.0
Sequential APIs
Functional APIs
Model subclassing APIs
Model compilation and training
The compile() API
The fit() API
Saving and restoring a model
Saving checkpoints as the training progresses
Manually saving and restoring weights
Saving and restoring an entire model
Custom training logic
Distributed training
TensorBoard
Hooking up TensorBoard with callbacks and invocation
Visualization of scalar, metrics, tensors, and image data
Graph dashboard
Hyperparameter tuning
What-If Tool
Profiling tool
Summary
Questions
Further reading
Section 3: TensorFlow 2.0 - Model Inference and Deployment and AIY
Model Inference Pipelines - Multi-platform Deployments
Technical requirements
Machine learning workflow – the inference phase
Understanding a model from an inference perspective
Model artifact – the SavedModel format
Understanding the core dataflow model
The tf.function API
The tf.autograph function
Exporting your own SavedModel model
Using the tf.function API
Analyzing SavedModel artifacts
The SavedModel command-line interface
Inference on backend servers
TensorFlow Serving
Setting up TensorFlow Serving
Setting up and running an inference server
When TensorFlow.js meets Node.js
Inference in the browser
Inference on mobile and IoT devices
Summary
AIY Projects and TensorFlow Lite
Introduction to TFLite
Getting started with TFLite
Running TFLite on mobile devices
TFLite on Android
TFLite on iOS
Running TFLite on low-power machines
Running TFLite on an Edge TPU processor
Running TF on the NVIDIA Jetson Nano
Comparing TFLite and TF
AIY
The Voice Kit
The Vision Kit
Summary
Section 4: TensorFlow 2.0 - Migration, Summary
Migrating From TensorFlow 1.x to 2.0
Major changes in TF 2.0
Recommended techniques to employ for idiomatic TF 2.0
Making code TF 2.0-native
Converting TF 1.x models
Upgrading training loops
Other things to note when converting
Frequently asked questions
The future of TF 2.0
More resources to look at
Summary
Other Books You May Enjoy
Leave a review - let other readers know what you think