Chapter 9. Troubleshooting Guide

We have come a long way, and I hope you have enjoyed every moment of this challenging yet rewarding learning journey. This chapter does not mark the end of the book so much as the completion of your first milestone, one that opens the door to learning and implementing a new paradigm in the cloud with microservice-based design. I would also like to reaffirm that integration testing is an important way to test the interactions among microservices and APIs. While working on the sample application, the Online Table Reservation System (OTRS), you have probably faced many challenges, especially while debugging. In this chapter, we will cover a few practices and tools that will help you troubleshoot the deployed application, Docker containers, and host machines.

This chapter begins with logging, the foundation of any troubleshooting effort, and then introduces the ELK stack for centralized log aggregation, analysis, and visualization.

Can you imagine debugging an issue on a production system without logs? Simply put, no, because you cannot go back in time. Therefore, we need logging. Well-designed logs also give us early warning signals about the system. Logging and log analysis are important not only for troubleshooting issues, but also for measuring throughput and capacity and for monitoring the health of the system. Therefore, a good logging platform and strategy enables effective debugging. Logging should be treated as a key component of software development from the earliest days of a project.

Microservices are generally deployed in containers such as Docker, which provide commands that help you read the logs of the services deployed inside them. Docker provides a command to stream the log output of a single running container, and Docker Compose can stream the logs of all of its containers at once. Please refer to the following logs commands for Docker and Docker Compose:
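The commands in question are docker logs and docker-compose logs; a minimal sketch follows, in which the container name is a placeholder for whatever name your OTRS container actually runs under:

```shell
# Stream (follow) the log output of a single running container;
# "otrs-container" is a placeholder name
docker logs -f otrs-container

# Show only the most recent lines, with timestamps
docker logs --tail 100 -t otrs-container

# Stream the aggregated log output of all services defined
# in the current docker-compose.yml
docker-compose logs -f
```

The -f flag keeps the stream open, so new log lines appear as the services write them.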

These commands help you explore the logs of microservices and other processes running inside containers. As you can see, using them becomes a challenging task as the number of services grows. For example, with tens or hundreds of microservices, it is very difficult to track each microservice's log. Even without containers, monitoring logs individually is hard, so you can imagine the difficulty of exploring and correlating the logs of tens or hundreds of containers. It is time-consuming and adds very little value.

Therefore, log aggregation and visualization tools such as the ELK stack come to our rescue by centralizing logging. We'll explore this in the next section.

The Elasticsearch, Logstash, Kibana (ELK) stack is a chain of tools that performs log aggregation, analysis, visualization, and monitoring. It provides a complete logging platform that allows you to analyze, visualize, and monitor all of your logs, including product and system logs. Briefly: Elasticsearch is a distributed, JSON-based search and analytics engine that stores and indexes the log data; Logstash is a data-collection pipeline that ingests, parses, and transforms logs before shipping them to Elasticsearch; and Kibana is a web UI for searching and visualizing the data stored in Elasticsearch. If you already know the ELK stack, please skip to the next section.

Generally, these tools are installed individually and then configured to communicate with each other. The installation of these components is fairly straightforward: download the installable artifact from the designated location and follow the installation steps shown in the next section.

The installation steps below describe the basic setup required for running the ELK stack. Since this installation was done on my local machine, I have used the host localhost; you can easily replace it with any host name you want.

We can install Elasticsearch by following these steps:

  1. Download the latest Elasticsearch distribution from https://www.elastic.co/downloads/elasticsearch.
  2. Unzip it to the desired location in your system.
  3. Make sure the latest Java version is installed and the JAVA_HOME environment variable is set.
  4. Go to Elasticsearch home and run bin/elasticsearch on Unix-based systems and bin/elasticsearch.bat on Windows.
  5. Open any browser and hit http://localhost:9200/. On successful installation, it should return a JSON object similar to the following:
    {
      "name" : "Leech",
      "cluster_name" : "elasticsearch",
      "version" : {
        "number" : "2.3.1",
        "build_hash" : "bd980929010aef404e7cb0843e61d0665269fc39",
        "build_timestamp" : "2016-04-04T12:25:05Z",
        "build_snapshot" : false,
        "lucene_version" : "5.5.0"
      },
      "tagline" : "You Know, for Search"
    }

    By default, the GUI is not installed. You can install one by executing the following command from the bin directory; make sure the system is connected to the Internet:

    plugin install mobz/elasticsearch-head
    
  6. Now you can access the GUI interface with the URL http://localhost:9200/_plugin/head/.

    You can replace localhost and 9200 with your respective hostname and port number.

We can install Logstash by following these steps:
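The Logstash setup mirrors the Elasticsearch one: download, unzip, write a pipeline configuration, and run. The sketch below is a minimal example; the input file path is an assumption, so point it at wherever your services actually write their logs:

```shell
# 1. Download the latest Logstash distribution from
#    https://www.elastic.co/downloads/logstash and unzip it to the
#    desired location.
# 2. Create a minimal pipeline configuration. The input path below is an
#    assumption -- adjust it to match your application's log directory.
cat > logstash.conf <<'EOF'
input {
  file {
    path => "/var/log/otrs/*.log"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
  stdout { codec => rubydebug }
}
EOF

# 3. From the Logstash home directory, start the pipeline:
bin/logstash -f logstash.conf
```

With this configuration, Logstash tails the matched log files, forwards each event to the local Elasticsearch instance, and echoes it to the console for quick verification.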

We can install the Kibana web application by following these steps:
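Kibana follows the same download-unzip-run pattern; it only needs to know where Elasticsearch lives. A sketch, assuming the default ports (the exact configuration key can vary slightly between Kibana versions, so check config/kibana.yml for your release):

```shell
# 1. Download the latest Kibana distribution from
#    https://www.elastic.co/downloads/kibana and unzip it to the
#    desired location.
# 2. Point Kibana at your Elasticsearch instance in config/kibana.yml:
#      elasticsearch.url: "http://localhost:9200"
# 3. From the Kibana home directory, start the server:
bin/kibana
# 4. Open http://localhost:5601/ in a browser and define an index
#    pattern (for example, logstash-*) to start exploring your logs.
```

As before, replace localhost and the port numbers with your respective host name and ports if you are not running everything locally.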

As you followed the above steps, you may have noticed that the setup requires some effort. If you want to avoid a manual setup, you can Dockerize it. If you don't want to put effort into creating your own Docker container for the ELK stack, you can choose a ready-made one from Docker Hub, where many ELK stack images are available. Try different ELK containers and choose the one that suits you best. willdurand/elk is the most downloaded container and is easy to start, working well with Docker Compose.
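Running such an image can be sketched as follows. The port mapping and the volume path inside the container are assumptions for illustration; check the image's page on Docker Hub for the ports it actually exposes and the directory its Logstash instance watches:

```shell
# Pull and run a ready-made ELK image directly
docker pull willdurand/elk
docker run -d --name elk willdurand/elk

# Or declare it in a Compose file so it starts alongside your services.
# The volume mapping below is an assumption -- mount your service logs
# wherever the image's Logstash configuration expects to find them.
cat > docker-compose.yml <<'EOF'
elk:
  image: willdurand/elk
  volumes:
    - ./logs:/var/log/otrs
EOF
docker-compose up -d
docker-compose logs -f
```

This way the whole logging platform starts with a single command and can be torn down just as easily with docker-compose down.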