Summary

This chapter demonstrated how to prepare API services for production deployment. A typical deployment needs three pieces: a web proxy server, an application server, and a process monitor.
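As a concrete reference point, the application server in this setup is an ordinary Go HTTP server listening on a local port. The following is a minimal sketch, not the chapter's exact program; the port 8000 and the /api/v1/health route are assumptions made for illustration:

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// healthHandler returns a small JSON body so the proxy and the
// process monitor have an endpoint to check.
func healthHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
}

func main() {
	http.HandleFunc("/api/v1/health", healthHandler)
	// Listen on a local port; Nginx proxies public traffic here.
	log.Fatal(http.ListenAndServe(":8000", nil))
}
```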

Nginx is a web proxy server that can forward requests to multiple upstream servers, whether they run on the same host or on different hosts.
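In its simplest form, that proxying is a server block with a proxy_pass directive. Here is a sketch that forwards to the Go server above; the file path, server_name, and port are assumptions, not values fixed by the chapter:

```nginx
# /etc/nginx/conf.d/api.conf (assumed path; adjust to your setup)
server {
    listen 80;
    server_name api.example.com;  # hypothetical domain

    location / {
        # Forward every request to the Go application server.
        proxy_pass http://127.0.0.1:8000;
        # Preserve the original host and client address for logging.
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```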

We learned how to install and configure Nginx. Nginx provides features such as load balancing and rate limiting, both of which are essential for production APIs. Load balancing is the process of distributing incoming requests among a group of similar servers. We explored the available load-balancing methods: Round Robin, IP Hash, Least Connections, and more. Then, we looked at how to add access control to our servers by allowing and denying sets of IP addresses, which is done by adding rules to the Nginx server blocks.
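Putting those pieces together, a configuration combining all three features might be sketched as follows; the backend addresses, zone name and size, rate, and IP range are placeholders chosen for illustration:

```nginx
# Rate limiting: allow each client IP roughly 10 requests per second.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

# Load balancing: distribute requests across similar backends.
upstream api_backend {
    least_conn;            # or ip_hash; Round Robin is the default
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
}

server {
    listen 80;

    location /api/ {
        # Access control: allow one subnet, deny everyone else.
        allow 10.0.0.0/24;
        deny  all;

        # Apply the rate limit, with a small burst allowance.
        limit_req zone=api_limit burst=20;

        proxy_pass http://api_backend;
    }
}
```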

Finally, we looked at Supervisord, a process monitor that brings a crashed application back to life. We installed Supervisord and used supervisorctl, its command-line client, to control the running processes. We then automated the deployment process by creating a Makefile and a docker-compose file, and explored how to containerize a Go application along with Nginx using Docker and Docker Compose. In the real world, containers are the preferred way to deploy software.
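For reference, a Supervisord program entry of the kind discussed might look like this; the program name, binary path, and log paths are assumptions:

```ini
; /etc/supervisor/conf.d/goapi.conf (assumed path and program name)
[program:goapi]
; Path to the compiled Go binary (placeholder).
command=/usr/local/bin/goapi
autostart=true
; Restart the process automatically if it crashes.
autorestart=true
stderr_logfile=/var/log/goapi.err.log
stdout_logfile=/var/log/goapi.out.log
```

After placing the file, supervisorctl reread and supervisorctl update register the program, and supervisorctl status shows whether it is running. Similarly, a docker-compose file wiring the Go application to Nginx could be sketched as follows; the service names, image tag, ports, and mounted config path are illustrative, not the chapter's exact file:

```yaml
version: "3"
services:
  app:
    build: .            # Dockerfile that builds the Go binary
    expose:
      - "8000"          # reachable only from other services
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"         # public entry point
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - app
```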

In the next chapter, we are going to demonstrate how to make our REST services publicly accessible with the help of AWS EC2 and Amazon API Gateway.