The goal of the optimizer is to minimize the loss, which it does by adjusting the weights in all of the layers of our network.
The following screenshot shows the lines of code used to define the optimizer and the training operation:
In this case, the optimizer is again the Adam optimizer, with a learning rate of 0.001. The training operation is the operation that tells the optimizer to minimize the loss.
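Since the screenshot itself is not reproduced here, the sketch below is a minimal stand-in for those lines, assuming the TensorFlow 1.x graph API (`tf.train.AdamOptimizer`); the `labels`, `logits`, and `loss` tensors are hypothetical placeholders for the network's actual loss computation:

```python
import tensorflow.compat.v1 as tf

tf.disable_v2_behavior()

# Stand-in loss for illustration; in the actual network, `loss` is
# computed from the final layer's output and the target labels.
labels = tf.placeholder(tf.float32, shape=[None, 10])
logits = tf.placeholder(tf.float32, shape=[None, 10])
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))

# Adam optimizer with a learning rate of 0.001.
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)

# The training operation: running this op performs one optimization
# step, adjusting the weights in every layer to reduce the loss.
train_op = optimizer.minimize(loss)
```

During training, each `session.run(train_op, feed_dict=...)` call executes one optimization step, computing gradients of the loss with respect to the weights and applying the Adam update.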