
5. A Guide to TensorFlow 2.0 and Deep Learning Pipeline

Orhan Gazi Yalçın, Istanbul, Turkey
In the previous chapters, we covered the fundamentals before diving into deep learning applications.
  • Chapter 1 explained the reasons behind the selection of technologies such as Python and TensorFlow. It also helped us set up our environments.

  • Chapter 2 made a brief introduction to machine learning, since deep learning is a subfield of machine learning.

  • In Chapter 3, we covered the basics of deep learning. These three chapters were conceptual and introductory.

  • Chapter 4 summarized all the technologies we use in our deep learning pipeline, except for one: TensorFlow.

In this chapter, we cover the basics of TensorFlow and the API references that we use in this book.

TensorFlow Basics

The main focus of this chapter is how we can use TensorFlow for neural networks and model training, but first, we need to cover a few topics under TensorFlow Basics, which are
  • Eager execution vs. graph execution

  • TensorFlow constants

  • TensorFlow variables

Eager Execution

One of the novelties brought by TensorFlow 2.0 was making eager execution the default option. With eager execution, TensorFlow calculates the values of tensors as they occur in your code. Eager execution simplifies the model building experience in TensorFlow, and you can see the result of a TensorFlow operation instantly.
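For instance, the following one-liner (a minimal illustration) is evaluated immediately, and the resulting tensor can be printed right away:
x = tf.add(1, 2)  # evaluated eagerly, no session required
print(x)  # tf.Tensor(3, shape=(), dtype=int32)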

The main motivation behind this change of heart was PyTorch’s dynamic computational graph capability. With dynamic computational graphs, PyTorch users were able to follow a define-by-run approach, in which you can see the result of an operation instantly.

However, with graph execution, TensorFlow 1.x followed a define-and-run approach, in which evaluation happens only after we’ve wrapped our code in a tf.Session. Graph execution has advantages for distributed training, performance optimization, and production deployment, but its difficulty of implementation drove newcomers toward PyTorch. This difficulty led the TensorFlow team to adopt eager execution, TensorFlow’s define-by-run approach, as the default execution method.

In this book, we only use the default eager execution for model building and training.

Tensor

Tensors are TensorFlow’s built-in multidimensional arrays with uniform type. They are very similar to NumPy arrays, but they are immutable: once created, they cannot be altered, and you can only create a new tensor that incorporates the edits.

Tensors are categorized based on the number of dimensions they have:
  • Rank-0 (Scalar) Tensor: A tensor containing a single value and no axes

  • Rank-1 Tensor: A tensor containing a list of values in a single axis

  • Rank-2 Tensor: A tensor containing two axes

  • Rank-N Tensor: A tensor containing N axes

For example, a Rank-3 Tensor can be created and printed out with the following lines:
import tensorflow as tf

rank_3_tensor = tf.constant([
  [[0, 1, 2, 3, 4],
   [5, 6, 7, 8, 9]],
  [[10, 11, 12, 13, 14],
   [15, 16, 17, 18, 19]],
  [[20, 21, 22, 23, 24],
   [25, 26, 27, 28, 29]],])
print(rank_3_tensor)
You can access detailed information about the tf.Tensor object with the following functions:
print("Type of every element:", rank_3_tensor.dtype)
print("Number of dimensions:", rank_3_tensor.ndim)
print("Shape of tensor:", rank_3_tensor.shape)
print("Elements along axis 0 of tensor:", rank_3_tensor.shape[0])
print("Elements along the last axis of tensor:", rank_3_tensor.shape[-1])
print("Total number of elements (3*2*5): ", tf.size(rank_3_tensor).numpy())
Output:
Type of every element: <dtype: 'int32'>
Number of dimensions: 3
Shape of tensor: (3, 2, 5)
Elements along axis 0 of tensor: 3
Elements along the last axis of tensor: 5
Total number of elements (3*2*5):  30
There are several functions that create a Tensor object. Other than tf.constant(), we often use the tf.ones() and tf.zeros() functions to create tensors containing only ones or zeros of a given size. The following lines provide examples of both:
zeros = tf.zeros(shape=[2,3])
print(zeros)
Output:
tf.Tensor(
[[0. 0. 0.]
 [0. 0. 0.]], shape=(2, 3), dtype=float32)
ones = tf.ones(shape=[2,3])
print(ones)
Output:
tf.Tensor(
[[1. 1. 1.]
 [1. 1. 1.]], shape=(2, 3), dtype=float32)
The base tf.Tensor class requires tensors to be in a rectangular shape, which means that every element has the same size along each axis. However, there are specialized types of tensors that can handle different shapes:
  • Ragged Tensors : A tensor with variable numbers of elements along some axis

  • Sparse Tensors : A tensor where our data is sparse, like a very wide embedding space
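As a minimal sketch (the values here are arbitrary), ragged and sparse tensors can be created as follows:
# A ragged tensor: rows may have different lengths
ragged = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
# A sparse tensor: only the nonzero values and their positions are stored
sparse = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                values=[1, 2],
                                dense_shape=[3, 4])
print(tf.sparse.to_dense(sparse))  # materialize the sparse tensor as a dense one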

Variable

A TensorFlow variable is the recommended way to represent a shared, persistent state that you can manipulate with a model. TensorFlow variables are recorded as tf.Variable objects. A tf.Variable object represents a tensor whose values can be changed, as opposed to plain TensorFlow constants. tf.Variable objects are used to store model parameters.

TensorFlow variables are very similar to TensorFlow constants, with one significant difference: variables are mutable. The values of a variable object can be altered in place (e.g., with the assign(), assign_add(), and assign_sub() functions). Note that tf.reshape() does not modify a variable in place; it returns a new tensor with the requested shape.

You can create a basic variable with the following code:
a = tf.Variable([2.0, 3.0])
You can also use an existing constant to create a variable:
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)
print(my_variable)
Output:
<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32, numpy=
array([[1., 2.],
       [3., 4.]], dtype=float32)>
You can convert a TensorFlow variable object or a TensorFlow tensor object to a NumPy array with the tensor.numpy() function, as shown here:
my_variable.numpy()
my_tensor.numpy()
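To illustrate mutability with a minimal sketch, the values of a variable can be updated in place with the assign() family of functions:
a = tf.Variable([2.0, 3.0])
a.assign([4.0, 5.0])      # replace the values in place
a.assign_add([1.0, 1.0])  # element-wise addition in place
print(a.numpy())          # [5. 6.]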

These are some of the fundamental concepts in TensorFlow. Now we can move on to model building and data processing with TensorFlow.

TensorFlow Deep Learning Pipeline

In the last section of Chapter 2, we listed the steps of a complete machine learning pipeline (i.e., the steps to obtain a trained machine learning model). In deep learning models, we almost exclusively use the same pipeline, in which TensorFlow does a great deal of the work. Figure 5-1 shows how our pipeline works (please note that you may encounter slight alterations in different sources).
Figure 5-1 Deep Learning Pipeline Built with TensorFlow

In the next sections, we cover these steps with code examples. Please note that the data gathering step will be omitted, since it is usually regarded as a separate task that is not typically performed by machine learning experts.

Data Loading and Preparation

Before building and training a neural network, the first step of deep learning is to load your data, process it, and feed it to the neural network. All the neural networks we cover in the next chapters require data, and we need to feed it in the right format. A TensorFlow model accepts several object types, which can be listed as follows:
  • TensorFlow Dataset object

  • TensorFlow Datasets catalog

  • NumPy array object

  • Pandas DataFrame object

Let’s dive into how we can use them.

Dataset Object (tf.data.Dataset)

A TensorFlow Dataset object represents a large set of elements (i.e., a dataset). The tf.data.Dataset API is one of the input types TensorFlow accepts for model training, and it is specifically designed for input pipelines.

You can use the Dataset API for the following purposes, all of which appear together in the short sketch after this list:
  • Create a dataset from the given data.

  • Transform the dataset with functions such as map.

  • Iterate over the dataset and process individual elements.
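The short sketch below combines all three purposes (the values are arbitrary):
ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])  # create from given data
ds = ds.map(lambda x: x * 2)                        # transform each element
for element in ds:                                  # iterate over the dataset
    print(element.numpy())                          # 2, 4, 6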

The Dataset API supports various file formats and Python objects, which can be used to create tf.data.Dataset objects. Let’s take a look at some of these supported file formats and objects:
  • Dataset from a Python list, NumPy array, or Pandas DataFrame with the from_tensor_slices function

ds = tf.data.Dataset.from_tensor_slices([1, 2, 3])
ds = tf.data.Dataset.from_tensor_slices(numpy_array)
ds = tf.data.Dataset.from_tensor_slices(df.values)
  • Dataset from a text file with the TextLineDataset function

ds = tf.data.TextLineDataset("file.txt")
  • Dataset from TensorFlow’s TFRecord format with the TFRecordDataset function

ds = tf.data.TFRecordDataset("file.tfrecord")
  • Dataset from a CSV file with the make_csv_dataset function

ds = tf.data.experimental.make_csv_dataset( "file.csv", batch_size=5)
  • Dataset from TensorFlow Datasets catalog: This will be covered in the next section.

TensorFlow Datasets Catalog

TensorFlow Datasets is a collection of popular datasets that are maintained by TensorFlow. They are usually clean and ready to use.

Installation

TensorFlow Datasets exists in two packages:
  • tensorflow-datasets: The stable version, updated every few months

  • tfds-nightly: The nightly released version, which contains the latest versions of the datasets

As you can understand from the names, you may use either the stable version, which updates less frequently but is more reliable, or the nightly version, which gives access to the latest versions of the datasets. But beware that, because of the frequent releases, tfds-nightly is more prone to breaking and is therefore not recommended for production-level projects.

In case you don’t have them on your system, you may install these packages with the following command-line scripts:
pip install tensorflow_datasets
pip install tfds-nightly

Importing

Both packages are imported via tensorflow_datasets, which is usually abbreviated as tfds. All you have to do is run a single line, as shown here:
import tensorflow_datasets as tfds

Datasets Catalog

After the main library is imported, we can use the load function to import one of the popular datasets listed on the TensorFlow Datasets catalog page, which is accessible at

www.tensorflow.org/datasets/catalog/overview

Under this catalog, you may find dozens of datasets, which belong to one of these listed groups:
  • Audio

  • Image

  • Image classification

  • Object detection

  • Question answering

  • Structured

  • Summarization

  • Text

  • Translate

  • Video

Loading a Dataset

The easiest way to load a dataset from TensorFlow Datasets catalog is to use the load function. This function will
  • Download the dataset.

  • Save it as TFRecord files.

  • Load the TFRecord files to your notebook.

  • Create a tf.data.Dataset object, which can be used to train a model.

The following example shows how to load a dataset with the load function:
mnist_dataset = tfds.load('mnist', split='train')
You can customize your loading process by setting particular arguments of the load function (see the example after this list):
  • split: Controls which part of the dataset to load

  • shuffle_files: Controls whether to shuffle the files between each epoch

  • data_dir: Controls the location where the dataset is saved

  • with_info: Controls whether the DatasetInfo object is also returned
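For example, the following call (a sketch; the argument values are illustrative) loads the training split, shuffles the files, and also returns the DatasetInfo object:
mnist_train, info = tfds.load('mnist',
                              split='train',
                              shuffle_files=True,
                              with_info=True)
print(info.features)  # description of the dataset's features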

In our upcoming sections, we take advantage of this catalog to a great extent.

Keras Datasets

In addition to the TensorFlow Datasets catalog, Keras provides access to a limited number of datasets listed in its own catalog, accessible at https://keras.io/api/datasets/. The datasets accessible under this catalog are
  • MNIST

  • CIFAR10

  • CIFAR100

  • IMDB Movie Reviews

  • Reuters Newswire

  • Fashion MNIST

  • Boston Housing

As you can see, this catalog is very limited, but it comes in handy in research projects.

You can load a dataset from Keras API with the load_data() function , as shown here:
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(path="mnist.npz")

One important difference between Keras’s datasets and TensorFlow’s datasets is that Keras’s datasets are imported as NumPy array objects.

NumPy Array

One of the data types accepted by TensorFlow as input data is the NumPy array. As mentioned in the previous chapter, you can import the NumPy library with the following line:
import numpy as np

You can create a NumPy array with the np.array() function, and this array can be fed into a TensorFlow model. You can also use a function such as np.genfromtxt() to load a dataset from a CSV file.
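As a brief sketch (the file path is hypothetical):
features = np.array([[1.0, 2.0], [3.0, 4.0]])  # array built in memory
data = np.genfromtxt("path/xyz.csv", delimiter=",", skip_header=1)  # array loaded from a CSV file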

In reality, we rarely use a NumPy function to load data. For this task, we often take advantage of the Pandas library, which acts almost as a NumPy extension.

Pandas DataFrame

Pandas DataFrame and Series objects are accepted by TensorFlow, just like NumPy arrays. There is a strong connection between Pandas and NumPy. Pandas often provides more powerful functionalities to process and clean our data, whereas NumPy arrays are usually more efficient and more widely recognized by other libraries. For example, you may need scikit-learn to preprocess your data: scikit-learn accepts a Pandas DataFrame as well as a NumPy array but only returns NumPy arrays. Therefore, a machine learning expert must learn to use both libraries.

You may import the Pandas library, as shown here:
import pandas as pd
You can easily load datasets from files in different formats such as CSV, Excel, and text, as shown here:
  • CSV Files: pd.read_csv("path/xyz.csv")

  • Excel Files: pd.read_excel("path/xyz.xlsx")

  • Text Files: pd.read_csv("path/xyz.txt")

  • HTML Files: pd.read_html("path/xyz.html") or pd.read_html('URL')

After loading your dataset from these file formats, Pandas gives you an impressive number of functionalities, and you can check the result of your data processing operations with the pandas.DataFrame.head() or pandas.DataFrame.tail() functions.
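A minimal sketch (the file path is hypothetical) of loading a CSV file, inspecting it, and handing it over to TensorFlow:
df = pd.read_csv("path/xyz.csv")
print(df.head())   # the first five rows
print(df.tail(3))  # the last three rows
dataset = tf.data.Dataset.from_tensor_slices(df.values)  # feed the values to TensorFlow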

Other Objects

The number of supported file formats increases with new versions of TensorFlow. In addition, TensorFlow I/O is an extension library that extends the number of supported formats even further with its API. Although the supported objects and file formats we covered earlier are more than enough, if you are interested in other formats, you may visit TensorFlow I/O’s official GitHub repository at

TensorFlow I/O: https://github.com/tensorflow/io#tensorflow-io

Model Building

After loading and processing the dataset, the next step is to build a deep learning model to train. We have two major options to build models:
  • Keras API

  • Estimator API

In this book, we only use Keras API and, therefore, focus on the different ways of building models with Keras API.

Keras API

As mentioned in the earlier chapters, Keras acts as a complementary library to TensorFlow. With version 2.0, TensorFlow adopted Keras as its built-in API for model building and additional functionalities.

Keras API under TensorFlow 2.x provides three different methods to implement neural network models:
  • Sequential API

  • Functional API

  • Model Subclassing

Let’s take a look at each method in the following.

Sequential API

The Keras Sequential API allows you to build a neural network in step-by-step fashion. You create a Sequential() model object and add one layer per line.

Using the Keras Sequential API is the easiest method to build models, which comes at a cost: limited customization. Although you can build a Sequential model within seconds, Sequential models do not provide functionalities such as (i) layer sharing, (ii) multiple branches, (iii) multiple inputs, and (iv) multiple outputs. A Sequential model is the best option when we have a plain stack of layers with one input tensor and one output tensor.

Using the Keras Sequential API is the most basic method to build neural networks, and it is sufficient for many of the upcoming chapters. But to build more complex models, we need the Keras Functional API or Model Subclassing.

Building a basic feedforward neural network with the Keras Sequential API can be achieved with the following lines:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense

model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128, activation='relu'))
model.add(Dense(10, activation='softmax'))
Alternatively, we may just pass a list of layers to the Sequential constructor:
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(10, activation='softmax'),
])

Once a Sequential model is built, it behaves like a Functional API model, which provides an input attribute and an output attribute for each layer.

During our case studies, we take advantage of other attributes and functions such as model.layers and model.summary() to understand the structure of our neural network.

Functional API

The Keras Functional API is a more robust and slightly more complex API for building powerful neural networks with TensorFlow. The models we create with the Keras Functional API are inherently more flexible than the models we create with the Keras Sequential API. They can handle nonlinear topology, share layers, and have multiple branches, inputs, and outputs.

The Keras Functional API stems from the fact that most neural networks are directed acyclic graphs (DAGs) of layers. Therefore, the Keras team developed the Functional API to make designing this structure easy. The Keras Functional API is a good way to build graphs of layers.

To create a neural network with the Keras Functional API, we create an input layer and connect it to the first layer. Each subsequent layer is connected to the previous one, and so on. Finally, a Model object takes the input tensor and the output tensor of the connected stack of layers as parameters.

The example model in the Keras Sequential API may be constructed using the Keras Functional API as follows:
inputs = tf.keras.Input(shape=(28, 28))
x = Flatten()(inputs)
x = Dense(128, activation="relu")(x)
outputs = Dense(10, activation="softmax")(x)
model = tf.keras.Model(inputs=inputs,
                        outputs=outputs,
                        name="mnist_model")
Just as in the Keras Sequential API, we can use the layers attribute and the summary() function. In addition, we may also plot the model as a graph with the following line:
tf.keras.utils.plot_model(model)
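To hint at the extra flexibility, the following minimal sketch (the layer sizes are arbitrary) builds a model with two inputs whose branches are merged before a single output:
input_a = tf.keras.Input(shape=(16,))
input_b = tf.keras.Input(shape=(8,))
x = Dense(8, activation="relu")(input_a)      # first branch
y = Dense(8, activation="relu")(input_b)      # second branch
merged = tf.keras.layers.concatenate([x, y])  # merge the two branches
outputs = Dense(1, activation="sigmoid")(merged)
multi_input_model = tf.keras.Model(inputs=[input_a, input_b], outputs=outputs)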

Model and Layer Subclassing

Model Subclassing is the most advanced Keras method, which gives us unlimited flexibility to build a neural network from scratch. You can also use Layer Subclassing to build custom layers (i.e., the building blocks of a model) which you can use in a neural network model.

With Model Subclassing, we can build fully custom neural networks to train. In Keras, the Model class is the root class used to define a model architecture.

The upside of Model Subclassing is that it is fully customizable, whereas its downside is the difficulty of implementation. Therefore, if you are building exotic neural networks or conducting research-level studies, Model Subclassing is the way to go. However, if your project can be done with the Keras Sequential API or the Keras Functional API, you should not bother with Model Subclassing.

The preceding example can be rewritten with Model Subclassing as follows:
class CustomModel(tf.keras.Model):
  def __init__(self, **kwargs):
    super(CustomModel, self).__init__(**kwargs)
    self.layer_1 = Flatten()
    self.layer_2 = Dense(128, activation="relu")
    self.layer_3 = Dense(10, activation="softmax")
  def call(self, inputs):
    x = self.layer_1(inputs)
    x = self.layer_2(x)
    x = self.layer_3(x)
    return x
There are a few crucial components in Model Subclassing:
  • The __init__ function acts as a constructor. Thanks to __init__, we can initialize the attributes (e.g., layers) of our model.

  • The super function is used to call the parent constructor (tf.keras.Model).

  • The self object is used to refer to instance attributes (e.g., layers).

  • The call function is where the operations are defined after the layers are initialized in the __init__ function.

In the preceding example, we defined our layers as attributes in the __init__ function and connected them inside the call function, similar to how we build a neural network with the Keras Functional API. But note that in Model Subclassing you can build your model however you want.

We can complete our model building by generating an object using our custom class (custom model) as follows:
model = CustomModel(name='mnist_model')

We use Model Subclassing in Chapters 10 and 11.

Estimator API

Estimator API is a high-level TensorFlow API, which encapsulates the following functionalities:
  • Training

  • Evaluation

  • Prediction

  • Export for serving

We can take advantage of various premade Estimators, and we can also write our own models with the Estimator API. The Estimator API has a few advantages over the Keras APIs, such as parameter server-based training and full TFX integration. However, the Keras APIs are expected to gain these capabilities as well, which makes the Estimator API optional.

This book does not cover the Estimator API in its case studies. Therefore, we don’t go into the details. But if you are interested in learning more about the Estimator API, please visit TensorFlow’s guide at www.tensorflow.org/guide/estimator.

Compiling, Training, and Evaluating the Model and Making Predictions

Compiling is an important part of deep learning model training, where we define our (i) optimizer, (ii) loss function, and other parameters such as (iii) callbacks. Training, on the other hand, is the step where we start feeding input data into our model so that it can learn to infer the patterns hidden in our dataset. Evaluating is the step where we check our model for common deep learning issues such as overfitting.

There are two methods to compile, train, and evaluate our model:
  • Using the standard method

  • Writing a custom training loop

The Standard Method

When we follow the standard training method, we can benefit from the following functions:
  • model.compile()

  • model.fit()

  • model.evaluate()

  • model.predict()

model.compile( )

model.compile() is the function where we set our optimizer, loss function, and performance metrics before training. It is a very straightforward step that can be achieved with a single line of code. Also note that there are two ways to pass the loss function and optimizer arguments to model.compile(), exemplified as follows:
  • Option 1: Passing arguments as strings

model.compile(
    optimizer='adam',
    loss='mse',
    metrics=['accuracy'])
  • Option 2: Passing arguments as TensorFlow objects

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[tf.keras.metrics.Accuracy()])

Passing the loss function, metrics, and optimizer as objects offers more flexibility than Option 1, since we can also set arguments within the objects.

Optimizer
The optimizer algorithms supported by TensorFlow are as follows:
  • Adadelta

  • Adagrad

  • Adam

  • Adamax

  • Ftrl

  • Nadam

  • RMSProp

  • SGD

The up-to-date list can be found at this URL:

www.tensorflow.org/api_docs/python/tf/keras/optimizers

You may select an optimizer via the tf.keras.optimizers module.
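For instance, passing the optimizer as an object lets us configure it, such as setting the learning rate (the value here is only illustrative):
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
model.compile(optimizer=optimizer, loss='mse', metrics=['accuracy'])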

Loss Function

Another important argument that must be set before starting the training is the loss function. The tf.keras.losses module supports a number of loss functions suitable for classification and regression tasks. The entire list can be found at this URL:

www.tensorflow.org/api_docs/python/tf/keras/losses
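As a sketch, a loss object can be configured and passed to model.compile() (whether from_logits should be True depends on whether your output layer already applies softmax):
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])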

model.fit( )

model.fit() trains the model for a fixed number of epochs (iterations over a dataset). It takes several arguments such as epochs, callbacks, and shuffle, and it must also take our data. Depending on the problem, this data might be (i) features only or (ii) features and labels. An example usage of the model.fit() function is as follows:
model.fit(train_x, train_y, epochs=50)
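A slightly richer call might look as follows (a sketch; the argument values are illustrative):
model.fit(train_x, train_y,
          epochs=50,
          shuffle=True,          # shuffle the training data before each epoch
          validation_split=0.1,  # hold out 10% of the data for validation
          callbacks=[tf.keras.callbacks.EarlyStopping(patience=3)])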

model.evaluate( )

The model.evaluate() function returns the loss and metric values for the model, computed on the test dataset. Its accepted arguments are similar to those of model.fit(), but it does not train the model any further.
model.evaluate(test_x, test_y)

model.predict( )

model.predict() is the function we use to make predictions. While the model.evaluate() function requires labels, the model.predict() function does not. It just makes predictions using the trained model, as shown here:
model.predict(sample_x)

Custom Training

Instead of following the standard training method, which allows you to use functions such as model.compile(), model.fit(), model.evaluate(), and model.predict(), you can fully customize this process.

To define a custom training loop, you have to use tf.GradientTape(). tf.GradientTape() records operations for automatic differentiation, which is very useful for implementing machine learning algorithms such as backpropagation during training. In other words, tf.GradientTape() allows us to track TensorFlow computations and calculate gradients.

For custom training, we follow these steps:
  • Set the optimizer, loss function, and metrics.

  • Run a for loop over the number of epochs.
    • Run a nested loop over each batch of the epoch:

      • Work with tf.GradientTape() to calculate and record the loss and to conduct backpropagation.

      • Run the optimizer.

      • Calculate, record, and print out metric results.

The following lines show an example of the standard method for training, where SCC and SCA abbreviate SparseCategoricalCrossentropy and SparseCategoricalAccuracy (see the imports at the top of the custom loop below). Just in two lines, you can configure and train your model.
model.compile(optimizer=Adam(), loss=SCC(from_logits=True), metrics=[SCA()])
model.fit(x_train, y_train, epochs=epochs)
The following lines, on the other hand, show how you can achieve the same results with a custom training loop.
# Imports for the optimizer, loss, metric, and Dataset (aliased for brevity)
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import SparseCategoricalCrossentropy as SCC
from tensorflow.keras.metrics import SparseCategoricalAccuracy as SCA
from tensorflow.data import Dataset
# Instantiate optimizer, loss, and metric
optimizer, loss_fn, accuracy = Adam(), SCC(from_logits=True), SCA()
# Convert the NumPy arrays to a TF Dataset object
train_dataset = (Dataset.from_tensor_slices((x_train, y_train))
                 .shuffle(buffer_size=1024).batch(batch_size=64))
for epoch in range(epochs):
    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # Open a GradientTape to record the operations, which enables auto-differentiation.
        with tf.GradientTape() as tape:
            # The operations that the layer applies to its inputs are going to be recorded
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        # Use the tape to automatically retrieve the gradients of the trainable variables
        grads = tape.gradient(loss_value, model.trainable_weights)
        # Run one step of gradient descent by updating
        # the value of the variables to minimize the loss.
        optimizer.apply_gradients(zip(grads, model.trainable_weights))
        # Update the metric with the results of this batch
        accuracy.update_state(y_batch_train, logits)
        if step % int(len(train_dataset) / 5) == 0:  # Print progress
            print(step, "/", len(train_dataset), " | ", end="")
    print("\rFor Epoch %.0f, Accuracy: %.4f" % (epoch+1, float(accuracy.result()),))
    accuracy.reset_states()

As you can see, custom training is much more complicated, and therefore, you should only use it when absolutely necessary. You may also customize the individual training step, the model.evaluate() function, and even the model.predict() function. Therefore, TensorFlow almost always provides enough flexibility for researchers and custom model developers. In this book, we take advantage of custom training in Chapter 12.

Saving and Loading the Model

We just learned how to build a neural network, and this information will be crucial for the case studies in the upcoming chapters. But we would also like to use our trained models in real-world applications. Therefore, we need to save our models so that they can be reused.

We can save an entire model to a single artifact. A saved model contains
  • The model’s architecture and configuration data

  • The model’s optimized weights

  • The model’s compilation information (model.compile() info)

  • The optimizer and its latest state

TensorFlow provides two formats to save models:
  • TensorFlow SavedModel Format

  • Keras HDF5 (or H5) Format

Although the HDF5 format was quite popular previously, SavedModel has become the recommended format for saving models in TensorFlow. The key difference between HDF5 and SavedModel is that HDF5 uses object configs to save the model architecture, while SavedModel saves the execution graph. The practical consequence of this difference is significant: SavedModel can save custom objects, such as models built with Model Subclassing or custom-built layers, without the original code. Saving custom objects in the HDF5 format requires extra steps, which makes HDF5 less appealing.

Saving the Model

Saving the model in one of these formats is very easy. The desired format can be selected with the save_format argument passed to the model.save() function.

To save a model in the SavedModel format, we can use the following line:
model.save("My_SavedModel")
If you’d like to save your model in the HDF5 format, we can simply use the same function with the save_format argument:
model.save("My_H5Model", save_format="h5")
For the HDF5 format, you can alternatively use Keras’s save_model() function, which takes the model as its first argument, as shown here:
tf.keras.models.save_model(model, "My_H5Model", save_format="h5")

After saving the model, the files containing the model can be found in your temporary Google Colab directory.

Loading the Model

After you save the model files, you can easily load and reconstruct the model. To load the model, we make use of the load_model() function offered by the Keras API. The following lines can be used to load a model saved in either format:
import tensorflow as tf
reconstructed_model = tf.keras.models.load_model( 'My_SavedModel' )
You can use the loaded model just like a model you trained yourself. The following lines mirror the model.evaluate() call used for the freshly trained model:
test_loss, test_acc = reconstructed_model.evaluate( x_test,  y_test, verbose=2)
print('\nTest accuracy:', test_acc)

Conclusion

Now that we have covered TensorFlow basics and how to use TensorFlow in our deep learning pipeline, we can start covering different types of neural networks along with their corresponding case studies. Our first neural network type is the feedforward neural network, also known as the multilayer perceptron.