Synchronous vs. asynchronous HTTP
The Servlet 3.0 spec for asynchronous HTTP
Code to demonstrate asynchronous HTTP between browser and microservice
Code to demonstrate asynchronous HTTP between microservices
The Google Protocol Buffer
Code to demonstrate using the Google Protocol Buffer between microservices
Communication Across the Outer Architecture
In the traditional monolith, the multiple components that form the building blocks of a service realization execute in a single container process. Method invocations or messaging between these components are “local” or “in process.” This allows the process, whether a container-managed process or a simple Java application process, to make many assumptions and optimizations while completing the execution. However, when you rearchitect the same service from the monolith into the microservices architecture, the implementation gets split and deployed across more than one process. This is shown in Figure 10-1.
Interprocess communications are heavyweight compared to communications within a process. They involve more context switches, network wait and I/O operations, data marshalling and unmarshalling, etc. This increases the complexity involved in the outer architecture space of a microservices architecture.
Further, every microservice has socket listeners accepting incoming traffic. So, compared to a traditional monolith, a microservices architecture involves many more socket listeners, with associated threads to process the incoming traffic, and so on. Looking into the details of a few of these handpicked concerns and analyzing the options to address them will give you a fair idea of the whole landscape.
Asynchronous HTTP
You briefly looked at distributed messaging in Chapter 6, where you also appreciated the differences between the synchronous and asynchronous style of communications between microservices. You saw that messaging is a first-class technology by which microservices can communicate asynchronously, and you saw via examples how you can correlate many asynchronous message transmissions so as to weave them together as pieces of a single, end-to-end request/response cycle. However, all of the samples in Chapter 8 used the HTTP protocol for intercommunications between microservices. You will look at the nature of these communications more closely in this section.
The Bad and the Ugly Part of HTTP
As stated, all of the samples in Chapter 8 use the HTTP protocol for intercommunications between microservices. If you examine these interactions again, you can observe that most of them are synchronous HTTP calls. Close observation reveals that an incoming request from a consumer microservice holds one servlet connection and its container thread in your provider microservice, which then performs a blocking call to the remote service before it can send a response back to the consumer microservice. It works, but it does not scale effectively when you have many such concurrent clients. Synchronous HTTP is a good fit when an end user sits in front of a client application and expects instant feedback or a response; however, that is not the case for server-side, service-to-service processing.
APIs for Asynchronous HTTP Processing
Typical web containers in application servers, such as the one in Apache Tomcat, normally use one server thread per client request. Under increased load, this forces containers to maintain a large number of threads to serve all the client requests, which limits scalability: the container can run out of memory or exhaust its pool of threads. Java EE has added asynchronous processing support for servlets and filters from the Servlet 3.0 spec onwards. A servlet or a filter, on reaching a potentially blocking operation while processing a request, can assign the operation to an asynchronous execution context and return the thread associated with the request immediately to the container without waiting to generate a response. The blocking operation can later complete in the asynchronous execution context on a different thread, which can generate a response or dispatch the request to another servlet. The javax.servlet.AsyncContext interface provides the functionality that you need to perform asynchronous processing inside service methods. Calling the startAsync() method on the HttpServletRequest object of your service method puts the request into asynchronous mode and ensures that the response is not committed even after exiting the service method. You have to generate the response in the asynchronous context after the blocking operation completes, or dispatch the request to another servlet.
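As a minimal illustration, a servlet that hands a blocking operation over to the asynchronous execution context could look like the following sketch; the two-second sleep and the /slow mapping simply stand in for a slow backend call and are not part of the chapter's samples.

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// asyncSupported = true is mandatory; without it startAsync() throws IllegalStateException
@WebServlet(urlPatterns = "/slow", asyncSupported = true)
public class SlowLookupServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Put the request into asynchronous mode; the container thread is released
        // as soon as doGet() returns, and the response is not yet committed.
        AsyncContext asyncContext = request.startAsync();
        asyncContext.start(() -> {
            try {
                Thread.sleep(2000);                              // stands in for a blocking backend call
                asyncContext.getResponse().getWriter().write("done");
            } catch (Exception e) {
                // a real servlet would log and send an error status here
            } finally {
                asyncContext.complete();                         // commit and close the response
            }
        });
    }
}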
Pseudo Code for a Sync Controller
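A minimal sketch of such a synchronous controller might look like the following; the simulated lookup and the /pseudo/products path are illustrative only.

import java.util.Arrays;
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SyncProductController {

    // The container thread stays blocked inside this handler until the slow
    // lookup returns, and only then is the response written.
    @GetMapping("/pseudo/products")
    public List<String> getAllProducts() throws InterruptedException {
        return slowLookup();
    }

    private List<String> slowLookup() throws InterruptedException {
        Thread.sleep(2000);                      // stands in for a blocking repository or remote call
        return Arrays.asList("product-1", "product-2");
    }
}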
Pseudo Code for an Async Controller Using Callable
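Along the same lines, here is a hedged sketch of the Callable variant: returning the Callable releases the container thread, and Spring MVC runs the lookup on a TaskExecutor-managed thread instead.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.Callable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CallableProductController {

    // The handler returns immediately; the lambda body executes later on another thread.
    @GetMapping("/pseudo/products-callable")
    public Callable<List<String>> getAllProducts() {
        return () -> slowLookup();
    }

    private List<String> slowLookup() throws InterruptedException {
        Thread.sleep(2000);                      // stands in for a blocking repository or remote call
        return Arrays.asList("product-1", "product-2");
    }
}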
Spring 3.2 introduced org.springframework.web.context.request.async.DeferredResult, which can also be returned from a controller method. DeferredResult provides an alternative to using a Callable for asynchronous request processing. While a Callable is executed concurrently on behalf of the application, with a DeferredResult the application can produce the result from a thread of its choice. The processing can thus happen in a thread not known to Spring MVC, for example when the response has to be obtained from a messaging channel. So, when you use DeferredResult, request processing is not finished even after you leave the controller handler method. Instead, Spring MVC (using Servlet 3.0 capabilities) holds on to the response, keeping the HTTP connection open but idle. Even though no HTTP worker thread is in use anymore, the HTTP connection remains open. At a later point in time, some other thread resolves the DeferredResult by assigning a value to it. Spring MVC immediately picks up this event and sends the response to the client, finishing the request processing.
Pseudo Code for an Async Controller Using DeferredResult
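A comparable sketch using DeferredResult follows; here the result is produced on a thread the application chooses (the common ForkJoinPool in this illustration), not on one managed by Spring MVC.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
public class DeferredResultProductController {

    @GetMapping("/pseudo/products-deferred")
    public DeferredResult<List<String>> getAllProducts() {
        DeferredResult<List<String>> deferredResult = new DeferredResult<>();
        // Any thread may complete the DeferredResult; Spring MVC finishes the
        // held response the moment setResult() is called.
        ForkJoinPool.commonPool().submit(() -> {
            try {
                Thread.sleep(2000);              // stands in for slow processing
                deferredResult.setResult(Arrays.asList("product-1", "product-2"));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                deferredResult.setErrorResult(e);
            }
        });
        return deferredResult;
    }
}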
Pseudo Code for an Async Controller using CompletableFuture
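And finally, a sketch of the CompletableFuture flavor, which Spring MVC treats much like a DeferredResult: the response is written whenever the future completes.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class CompletableFutureProductController {

    @GetMapping("/pseudo/products-future")
    public CompletableFuture<List<String>> getAllProducts() {
        // supplyAsync runs the lookup on the common ForkJoinPool by default
        return CompletableFuture.supplyAsync(() -> Arrays.asList("product-1", "product-2"));
    }
}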
Design a Scenario to Demo Async HTTP Between Microservices
You will use a trimmed-down version of the same components used for the previous examples. So your sample here consists of three main components: an HTML-based client app and two microservices, as shown in Figure 10-2. I have removed all the complexities of HATEOAS and any data repositories so that you can concentrate on async HTTP alone. Once you appreciate the design and the main components used to implement it, you should be able to apply a similar pattern to your other, more complex business scenarios.
Let’s walk through the design in more detail, referring to the code snippets, so that the concepts become clearer.
Code to Use Async HTTP in Spring Boot
Adding spring-boot-starter-web to Spring Boot (ch10\ch10-01\ProductServer\pom.xml)
spring-boot-starter-web will add Tomcat and Spring MVC to the Product Server microservice.
Async Enabled Spring Boot Application (ch10\ch10-01\ProductServer\src\main\java\com\acme\ecom\product\EcomProductMicroserviceApplication.java)
The @EnableAsync annotation switches on Spring’s ability to run @Async methods in a background thread pool. This class also customizes the Executor backing the thread pool. By default, a SimpleAsyncTaskExecutor is used. The SimpleAsyncTaskExecutor does not reuse threads. Even though it supports limiting concurrent threads through the concurrencyLimit bean property, by default the number of concurrent threads is unlimited. In serious applications you should consider a thread-pooling TaskExecutor implementation instead.
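A hedged sketch of such an application class follows; the explicit SimpleAsyncTaskExecutor bean and its concurrency limit are illustrative only, chosen to match the thread names you will see later in the console logs.

import java.util.concurrent.Executor;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import org.springframework.core.task.SimpleAsyncTaskExecutor;
import org.springframework.scheduling.annotation.EnableAsync;

@SpringBootApplication
@EnableAsync
public class EcomProductMicroserviceApplication {

    public static void main(String[] args) {
        SpringApplication.run(EcomProductMicroserviceApplication.class, args);
    }

    // SimpleAsyncTaskExecutor starts a new thread per task; the concurrency limit
    // below is an illustrative cap. A thread-pooling TaskExecutor such as
    // ThreadPoolTaskExecutor is the better choice for serious applications.
    @Bean
    public Executor taskExecutor() {
        SimpleAsyncTaskExecutor executor = new SimpleAsyncTaskExecutor();
        executor.setConcurrencyLimit(50);
        return executor;
    }
}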
Service Component Implementing Async Processing (ch10\ch10-01\ProductServer\src\main\java\com\acme\ecom\product\service\ProductService.java)
The class is marked with the @Service annotation, making it a candidate for Spring’s component scanning to detect and add to the application context. Next, the getAllProducts method is marked with Spring’s @Async annotation, indicating that it will run on a separate thread. The method’s return type is CompletableFuture<List<Product>> instead of List<Product>, a requirement for any asynchronous service. This code uses the completedFuture method to return a CompletableFuture instance that is already completed with the collection of results queried from the repository (processing logic that is assumed to take a considerable amount of time to complete).
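A stripped-down sketch of such a service follows; the package of the Product entity is assumed, and the Thread.sleep stands in for the time-consuming repository query.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Service;

import com.acme.ecom.product.model.Product;   // package of the Product entity assumed

@Service
public class ProductService {

    // Runs on a thread supplied by the @EnableAsync infrastructure, not on the
    // container thread that accepted the HTTP request.
    @Async
    public CompletableFuture<List<Product>> getAllProducts() {
        try {
            Thread.sleep(3000);                // stands in for a slow repository query
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        List<Product> products = Arrays.asList(new Product(), new Product());   // assumes a no-arg constructor
        return CompletableFuture.completedFuture(products);
    }
}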
Product Server Rest Controller Facilitating Async (ch10\ch10-01\ProductServer\src\main\java\com\acme\ecom\product\controller\ProductRestController.java)
Note that the CompletableFuture response returned by productService.getAllProducts() allows you to control the moment when the future completes and also to transform the output in the process. You then convert this CompletableFuture into the format you need with the help of thenApply(), which also allows you to log some data about the current thread to make sure that the execution really happens asynchronously, that is, that the thread finishing the work is not the thread that started the work. The REST controller then returns the collection of products, once again as a CompletableFuture<List<Product>>, so that Spring MVC will also execute the HTTP method in an async manner, making it a smartly implemented microservice! Keep in mind that this is an independent microservice; you will next look at a dependent microservice that depends on this independent one (as shown in the design).
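A sketch of what the corresponding controller method could look like; the logging inside thenApply() is where you can verify which thread completes the work, and the Product import is again an assumption.

import java.util.List;
import java.util.concurrent.CompletableFuture;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.acme.ecom.product.model.Product;   // package of the Product entity assumed
import com.acme.ecom.product.service.ProductService;

@RestController
public class ProductRestController {

    @Autowired
    private ProductService productService;

    @GetMapping("/products")
    public CompletableFuture<List<Product>> getAllProducts() {
        // thenApply() runs when the future completes, typically on the async executor's
        // thread rather than on the container thread that entered this method
        return productService.getAllProducts().thenApply(products -> {
            System.out.println("Completing on " + Thread.currentThread());
            return products;
        });
    }
}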
Microservice Calling Async HTTP on Another Microservice (ch10\ch10-01\ProductWeb\src\main\java\com\acme\ecom\product\controller\ProductRestController.java)
The PRODUCT_SERVICE_URL here refers to the independent microservice. The Product Web microservice has to invoke the Product Server microservice. You have already made the Product Server microservice methods asynchronous. Let’s now make the Product Web microservice method asynchronous too, so that the containers hosting these microservices can better utilize their resources by bringing asynchronous characteristics to the runtime model. You are not done yet; you also want to invoke the Product Server microservice in an asynchronous mode so that both the microservice implementations and the intermicroservices communications operate in smart, asynchronous mode!
org.springframework.web.client.AsyncRestTemplate is Spring’s central class for asynchronous client-side HTTP access. It exposes methods similar to those of RestTemplate; however, it returns ListenableFuture wrappers as opposed to concrete results. By default, AsyncRestTemplate relies on standard JDK facilities to establish HTTP connections. Here again you can switch to a different HTTP library, such as Apache HttpComponents, Netty, or OkHttp, by using a constructor accepting an AsyncClientHttpRequestFactory. Because the AsyncRestTemplate gives you a ListenableFuture, the container thread does not wait for the response to come back; instead it continues with the next processing steps. ListenableFuture can accept completion callbacks, so you add callbacks for the failure and success scenarios. On success, you set the body of the response as the result of the DeferredResult. Since Spring MVC is holding on to the response through the idle HTTP connection, the client receives the result as soon as you set it on the DeferredResult.
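Putting those pieces together, a simplified sketch of the Product Web controller could look like this; the URL and mapping path follow the ports used in this sample, and the Product import is again an assumption.

import java.util.List;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.AsyncRestTemplate;
import org.springframework.web.context.request.async.DeferredResult;

import com.acme.ecom.product.model.Product;   // package of the Product entity assumed

@RestController
public class ProductRestController {

    private static final String PRODUCT_SERVICE_URL = "http://localhost:8080/products/";

    private final AsyncRestTemplate asyncRestTemplate = new AsyncRestTemplate();

    @GetMapping("/productsweb")
    public DeferredResult<List<Product>> getAllProducts() {
        DeferredResult<List<Product>> deferredResult = new DeferredResult<>();
        ParameterizedTypeReference<List<Product>> responseTypeRef =
                new ParameterizedTypeReference<List<Product>>() {};

        ListenableFuture<ResponseEntity<List<Product>>> entity = asyncRestTemplate.exchange(
                PRODUCT_SERVICE_URL, HttpMethod.GET, (HttpEntity<Product>) null, responseTypeRef);

        // The container thread returns right after this method exits; the callbacks
        // below run whenever the remote call completes.
        entity.addCallback(
                result -> deferredResult.setResult(result.getBody()),
                failure -> deferredResult.setErrorResult(failure));
        return deferredResult;
    }
}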
Web Client Invoking Async HTTP (ch10\ch10-01\ProductWeb\src\main\resources\static\js\service\product_service.js)
Build and Test Asynchronous HTTP Between Microservices
The complete code required to demonstrate asynchronous HTTP between microservices is in folder ch10\ch10-01. You don’t require MongoDB for this sample. You can build, pack, and run the different microservices in the following order.
Refer to Appendix D to get an overview of cURL and Postman. You can use any of these tools to test the above microservice by pointing to http://localhost:8080/products/.
You can again use cURL or Postman to test the Product Web microservice pointing to http://localhost:8081/productsweb/.
Upon loading, the browser client fires a request to Product Web, listening on port 8081, which delegates the request to the Product Server microservice listening on port 8080. All is good so far, as far as the request routes are concerned. In fact, the HTTP conduit between the client and the Product Web microservice, as well as the one between the Product Web microservice and the Product Server microservice, are kept open and idle, and the container threads in both microservices are returned to their pools, free to serve other requests. So your Postman client or the HTML utility client keeps waiting over the idle HTTP connection. On the server side, processing happens in different threads, and as soon as the results are available they are written back to the open HTTP connection.
Console Log of ProductServer (Called) Microservice
1. The first line just introduces a delay so that the caller (the line of code from the ProductRestController of the ProductWeb microservice, reproduced below) will perceive a delay from the called microservice:

ListenableFuture<ResponseEntity<List<Product>>> entity =
        asyncRestTemplate.exchange(PRODUCT_SERVICE_URL, HttpMethod.GET,
                (HttpEntity<Product>) null, responseTypeRef);

2. The second line of code actually returns the executing thread, so Thread[http-nio-8080-exec-10,5,main] will be returned to the embedded web container’s HTTP pool.
Later when CompletableFuture.completedFuture(products) resumes, it will get executed in some other thread context (Thread[SimpleAsyncTaskExecutor-1,5,main] in your case, as shown in Listing 10-11).
Console Log of ProductWeb (Caller) Microservice
Listings 10-11 and 10-12 show that the background processing happens in threads from the SimpleAsyncTaskExecutor, whereas the container threads (those of the form http-nio-*) have already completed the request part of the transaction and have been given back to the pool. This is a powerful way of bridging the synchronous, blocking paradigm of a RESTful interface with the asynchronous, non-blocking processing performed on the server side.
Google Protocol Buffer Between Spring Boot Microservices
The previous section talked about effectively utilizing the microservices’ server resources, such as HTTP connections and threads. Equally important are other possible optimizations in the communication between microservices. Since microservices are spread across process boundaries, the amount of data sent between them matters. This boils down to the marshalling and unmarshalling of data structures across microservices. You will look into this with examples in this section.
Protocol Buffer
Protocol buffers are Google’s platform-neutral and language-neutral mechanism for marshalling and unmarshalling structured data. To use Protocol Buffers, you define how you want your data to be structured, and then you can use generated source code in multiple platforms and languages to easily write and read your structured data to and from a variety of data streams. One benefit of using Protocol Buffers is that you can even update your data structure without breaking already deployed application code that was compiled against the “old” format. This is especially important when architecting applications capable of adapting or evolving to future requirement changes.
Sample .proto File
.proto File for a Collection of Other .proto Types
You can specify fields as optional, required, and/or repeated, as shown in the Products message.
Marshalling and Unmarshalling .proto Types
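To give a feel for the generated API, here is a hedged Java sketch of marshalling and unmarshalling; the ECom outer class name comes from the java_outer_classname option discussed later, while its package and the field accessor names (setProductId, setName, addProduct, getProductCount) are assumptions for illustration.

import com.acme.ecom.product.ECom;   // generated outer class; package per the java_package option (assumed)

public class ProtoMarshallingDemo {

    public static void main(String[] args) throws Exception {
        // Build messages through the generated builders (accessor names assumed)
        ECom.Product product = ECom.Product.newBuilder()
                .setProductId("P001")
                .setName("Sample Product")
                .build();
        ECom.Products products = ECom.Products.newBuilder()
                .addProduct(product)
                .build();

        // Marshal to the compact binary wire format...
        byte[] bytes = products.toByteArray();

        // ...and unmarshal it back into an equivalent message object
        ECom.Products parsed = ECom.Products.parseFrom(bytes);
        System.out.println(parsed.getProductCount() + " product(s) in " + bytes.length + " bytes");
    }
}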
You can now add new fields to your message formats without breaking backwards-compatibility; any old program binaries will ignore any new fields when parsing. So by using protocol buffers as the data format, you can extend your message protocol without having to worry about breaking existing code.
Compared to XML, protocol buffers have several advantages:
- Simpler
- 3 to 10 times smaller
- 20 to 100 times faster
- Less ambiguous
- Generate data access classes that are easier to use programmatically
The second and third bullet points are of great significance, especially when you design the numerous chatty interactions between microservices that any serious enterprise-grade application involves.
Let’s now look at a complete example to explain the usage of Protocol Buffer between microservices.
A Scenario to Demonstrate Protocol Buffer Between Microservices
In your example design, you use a .proto file in which you spec out the entities you want to use. The next step is to run the Protocol Buffer compiler for the application’s programming language (Java, in your case) on your .proto file to generate the corresponding classes. Now both the dependent and the independent microservices can be programmed against the generated entity classes, making intermicroservices communication using Protocol Buffer a straightforward step.
Code to Use Protocol Buffer in Spring Boot
Adding a Protocol Buffer Compiler and Runtime to the Maven Build (ch10\ch10-02\ProductServer\pom.xml)
protoc-jar-maven-plugin performs protobuf code generation using the multi-platform executable protoc-jar. At build time, this Maven plugin detects the platform and executes the corresponding protoc binary embedded in protoc-jar to compile the .proto files, providing portability across the major platforms (Linux, Mac/OS X, and Windows). In the pom.xml above, the compiler compiles the .proto files in the main cycle and places the generated files into target/generated-sources, including the google.protobuf standard types and any additional imports required.
The protobuf-java Maven artifact provides the Java APIs required to serialize the generated message objects in Java. So it acts as the runtime for Protocol Buffer.
Note that the compiler version should be the same as the Java API version; in this sample, you use version 3.1.0 of both.
The protobuf-java-format Maven artifact provides serialization and deserialization of different formats based on Google’s protobuf Message. It enables overriding the default (byte array) output with text-based formats such as XML, JSON, and HTML.
Type Declaration in product.proto (ch10\ch10-02\ProductServer\src\main\resources\product.proto)
Since you use version 3 of both the protocol buffer compiler and the protocol buffer language runtime, the .proto file must start with the syntax = “proto3” declaration. If a compiler version 2 is used instead, this declaration would be omitted.
The .proto file should next have a package declaration, which helps prevent naming conflicts between different type declarations across different projects. In Java, this package name is also used as the Java package unless you have explicitly specified a java_package, as you have here.
Next are two Java-specific options: the java_package and java_outer_classname. The java_package option specifies in what Java package your generated classes should live. If you don’t specify this explicitly, it matches the package name given by the package declaration. The java_outer_classname option defines the container class name, which should contain all of the classes generated in the type definition file.
Next, you add a message for each data structure you want to serialize and then specify a name and a type for each field in the message. A message is an aggregate containing a set of typed fields. Many standard simple data types are available as field types, including bool, int32, float, double, and string. You can also add other message types as field types; in your example, the Products message contains Product messages. The = 1, = 2, = 3, etc. are markers on each element to identify the unique “tag” that the field uses in the binary encoding.
required: Indicates that a value for the field must be provided; otherwise the message will be considered “uninitialized.”
optional: Indicates that the field may or may not be set. If an optional field value isn’t set, a default value is used, like zero for numeric types, the empty string for strings, false for Booleans, etc.
repeated: Indicates that the field may be repeated any number of times (including zero). The order of the repeated values will be preserved in the protocol buffer.
Product Rest Controller to Emit Protocol Buffer (ch10\ch10-02\ProductServer\src\main\java\com\acme\ecom\product\controller\ProductRestController.java)
There is no major noticeable difference in the Rest Controller except the fact that you need to import Products and Product Java types defined within the outer container class, ECom.
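A hedged sketch of such a controller; the builder calls mirror the assumptions made earlier about the generated field accessors and the package of the ECom class.

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.acme.ecom.product.ECom;   // generated outer class (package assumed)

@RestController
public class ProductRestController {

    // The generated ECom.Products message is returned as-is; the registered
    // protobuf message converter takes care of writing it to the wire.
    @GetMapping("/products")
    public ECom.Products getAllProducts() {
        ECom.Product product = ECom.Product.newBuilder()
                .setProductId("P001")          // accessor names assumed
                .setName("Sample Product")
                .build();
        return ECom.Products.newBuilder().addProduct(product).build();
    }
}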
Configuring Protocol Buffer Message Converter (ch10\ch10-02\ProductServer\src\main\java\com\acme\ecom\product\controller\ProductRestControllerConfiguration.java)
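The configuration can be as small as registering Spring’s ProtobufHttpMessageConverter as a bean, which Spring Boot then adds to the set of HTTP message converters; a minimal sketch follows.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter;

@Configuration
public class ProductRestControllerConfiguration {

    // Lets Spring MVC read and write com.google.protobuf.Message types; with
    // protobuf-java-format on the classpath it can also render them as JSON, XML, or HTML.
    @Bean
    public ProtobufHttpMessageConverter protobufHttpMessageConverter() {
        return new ProtobufHttpMessageConverter();
    }
}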
That’s all for the Product Server microservice. Now you will look at the code for the Product Web microservice. All of the code snippets you have seen and the explanations made for the Product Server microservice are still valid, so I will not repeat them here; instead, I will explain only the additional requirements.
Rest Template to Invoke Protocol Buffer (ch10\ch10-02\ProductWeb\src\main\java\com\acme\ecom\product\controller\ProductRestControllerConfiguration.java)
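On the Product Web side, the same converter is registered and then handed to a RestTemplate so that the client side can unmarshal the binary payload; a sketch under those assumptions:

import java.util.ArrayList;
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.protobuf.ProtobufHttpMessageConverter;
import org.springframework.web.client.RestTemplate;

@Configuration
public class ProductRestControllerConfiguration {

    @Bean
    public ProtobufHttpMessageConverter protobufHttpMessageConverter() {
        return new ProtobufHttpMessageConverter();
    }

    // A RestTemplate that understands the protobuf wire format of the Product Server responses
    @Bean
    public RestTemplate restTemplate(ProtobufHttpMessageConverter protobufHttpMessageConverter) {
        List<HttpMessageConverter<?>> converters = new ArrayList<>();
        converters.add(protobufHttpMessageConverter);
        return new RestTemplate(converters);
    }
}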
Rest Controller Delegating Calls to the Microservice Emitting Protocol Buffer (ch10\ch10-02\ProductWeb\src\main\java\com\acme\ecom\product\controller\ProductRestController.java)
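A simplified sketch of the delegating controller; the proxy URL reflects the TCPMon setup described in the build-and-test steps below, the ECom import is an assumption, and how the returned message is rendered back to the browser depends on the converters registered in the web tier.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

import com.acme.ecom.product.ECom;   // generated outer class (package assumed)

@RestController
public class ProductRestController {

    // Port 8081 is the TCPMon proxy in front of the Product Server on port 8080
    private static final String PRODUCT_SERVICE_URL = "http://localhost:8081/products/";

    @Autowired
    private RestTemplate restTemplate;

    @GetMapping("/productsweb")
    public ECom.Products getAllProducts() {
        // The protobuf-aware RestTemplate unmarshals the binary payload into ECom.Products
        return restTemplate.getForObject(PRODUCT_SERVICE_URL, ECom.Products.class);
    }
}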
Build and Test the Protocol Buffer Between Microservices
Once your message structure is defined in a .proto file, you need the protoc compiler to convert this language-neutral content into Java code. Follow the instructions in the Protocol Buffer repository (https://github.com/google/protobuf) to get an appropriate compiler version. Alternatively, you can download a prebuilt binary compiler from the Maven central repository by searching for the com.google.protobuf:protoc artifact and picking an appropriate version for your platform.
Because you want Java classes, you use the --java_out option; however, similar options are provided for other supported languages.
The above steps are handy, but they are not straightforward to weave into the Maven build. That’s why you used the protoc-jar-maven-plugin in the pom.xml in Listing 10-16; it performs protobuf code generation using the multiplatform executable protoc-jar. Hence no manual steps are needed to build and run the samples. Let’s get started.
The complete code required to demonstrate intermicroservices communication using Protocol Buffer is kept inside folder ch10\ch10-02. You don’t need MongoDB for this sample. You can build, pack, and run the different microservices in the following order.
Upon loading, the browser client fires a request to Product Web, listening on port 8082, which delegates the request to the Apache TCPMon proxy listening on port 8081. The request hitting port 8081 on the host where TCPMon is running (localhost, in your case) is proxied to port 8080 on localhost, where the Product Server microservice is listening. If everything goes as planned, you should be able to see the products listed on the web page. If you inspect the TCPMon proxy console, you should be able to validate that the communication between the two microservices uses Protocol Buffer as the wire-level format for the HTTP-based REST invocation.
The Impact of Using Protocol Buffer
Let’s inspect the response content length while using Protocol Buffer for intermicroservices communications. For this comparison, you will look at three scenarios.
Protocol Buffer Encoding
XML Encoding
JSON Encoding
Enforce JSON Encoding (ch10\ch10-02\ProductServer\src\main\java\com\acme\ecom\product\controller\ProductRestController.java)
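One way to force JSON onto the wire, assuming the protobuf-java-format support registered earlier is in place, is to narrow the mapping’s produces attribute so that content negotiation selects the JSON representation instead of the binary one; this is a hedged sketch of that approach, not necessarily how the sample enforces it.

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import com.acme.ecom.product.ECom;   // generated outer class (package assumed)

@RestController
public class ProductRestController {

    // With produces narrowed to JSON, the protobuf converter writes the text
    // representation, whose size can then be compared with the binary encoding.
    @GetMapping(value = "/products", produces = MediaType.APPLICATION_JSON_VALUE)
    public ECom.Products getAllProducts() {
        ECom.Product product = ECom.Product.newBuilder()
                .setProductId("P001")          // accessor names assumed
                .setName("Sample Product")
                .build();
        return ECom.Products.newBuilder().addProduct(product).build();
    }
}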
You can now rebuild and restart the Product Server microservice and rerun the client application by going to http://localhost:8082/.
You can now compare these three scenarios and get an idea of the variation in the size of the response while using Protocol Buffer for communication between microservices.
Summary
Microservices are like double-edged swords: they provide extra leverage but should be used with care. This is mainly due to the Inversion of Architecture discussed in Chapter 4. You learned two refactorings you can apply to your microservices architecture to significantly influence its performance. There are more optimizations; however, I will limit the discussion to the two covered in this chapter. A major chunk of the rest of the book talks about event-based microservices, where the asynchronous nature of intermicroservices communication is leveraged by default using messaging instead of HTTP. You’ll start getting into those aspects in the next chapter.