Now, let's take a look at some anti-patterns that you might encounter when working with microservice-based projects and explore alternative ways of dealing with them.
Sharing a database is probably the biggest mistake that engineers new to the microservice pattern make when they attempt to split a monolith into microservices for the first time. As a rule of thumb, each microservice must be provisioned with its own private data store (assuming it needs one) and expose an API so that other microservices can access its data. This pattern gives us the flexibility to select the most suitable storage technology (for example, NoSQL or relational) for the needs of each particular microservice.
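As a rough illustration of that rule, the sketch below (all type and method names are invented for this example) has an order service read user data only through the user service's API; the user service's database never appears in the consumer's code:

```go
package order

import (
	"context"
	"fmt"
)

// UserAPI is the only surface other services may use to read user data.
// The user service's database remains a private implementation detail.
type UserAPI interface {
	GetUser(ctx context.Context, id string) (User, error)
}

// User is the representation exposed over the API, not the storage schema.
type User struct {
	ID    string
	Email string
}

// Service depends on the user service's API, never on its tables.
type Service struct {
	users UserAPI
}

func (s *Service) EmailForOrder(ctx context.Context, userID string) (string, error) {
	u, err := s.users.GetUser(ctx, userID)
	if err != nil {
		return "", fmt.Errorf("fetch user %s: %w", userID, err)
	}
	return u.Email, nil
}
```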
Communication between microservices might fail for a variety of reasons (for example, a service crash, network partition, or lost packets). A correct microservice implementation should operate under the assumption that outbound calls can fail at any time. Instead of immediately bailing out with an error when things go wrong, microservices should always implement some sort of retry logic.
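A minimal retry helper along these lines might look as follows; the attempt count and exponential backoff shown here are illustrative choices, not part of any particular library:

```go
package client

import (
	"context"
	"time"
)

// callWithRetry invokes fn up to maxAttempts times, doubling the wait
// between attempts. It gives up early if the context is cancelled.
func callWithRetry(ctx context.Context, maxAttempts int, fn func(context.Context) error) error {
	backoff := 100 * time.Millisecond
	var err error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		if err = fn(ctx); err == nil {
			return nil
		}
		select {
		case <-time.After(backoff):
			backoff *= 2 // exponential backoff between retries
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return err // all attempts failed; surface the last error
}
```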
A corollary to the preceding statement is that when a connection to a remote microservice drops before a reply is received, the client cannot be sure whether the remote server actually managed to process the request. Based on the preceding recommendation, the client will typically retry the call. Consequently, every microservice that exposes an API must be written in such a way that requests are idempotent.
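One common way to achieve this, sketched below under the assumption that clients send a unique Idempotency-Key header with each mutating request, is to record the outcome of the first attempt and replay it for any retry that carries the same key (a real service would persist the keys in its data store rather than in memory):

```go
package payments

import (
	"net/http"
	"sync"
)

// processed remembers which idempotency keys have already been handled,
// so a retried request returns the original outcome instead of being
// executed a second time.
var (
	mu        sync.Mutex
	processed = map[string]int{} // idempotency key -> HTTP status of first attempt
)

func chargeHandler(w http.ResponseWriter, r *http.Request) {
	key := r.Header.Get("Idempotency-Key")
	if key == "" {
		http.Error(w, "missing Idempotency-Key header", http.StatusBadRequest)
		return
	}

	mu.Lock()
	if status, seen := processed[key]; seen {
		mu.Unlock()
		w.WriteHeader(status) // duplicate retry: replay the earlier result
		return
	}
	// ... perform the charge exactly once ...
	processed[key] = http.StatusCreated
	mu.Unlock()

	w.WriteHeader(http.StatusCreated)
}
```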
Another common anti-pattern is to allow a service to become a single point of failure for the entire system. Imagine a scenario where you have three services that all depend on a piece of data that's exposed by a fourth, downstream service. If the latter service is underprovisioned, a sudden spike in traffic to the three upstream services might cause requests to the downstream service to time out. The upstream services would then retry their requests, increasing the load on the downstream service even further, up to the point where it becomes unresponsive or crashes. As a result, the upstream services now begin experiencing elevated error rates that affect calls made to them by other upstream services, and so on.
To avoid situations like this, microservices can implement the circuit breaker pattern: when the number of errors from a particular downstream service exceeds a threshold, the circuit breaker is tripped and all future requests automatically fail with an error. Periodically, the circuit breaker lets some requests go through and, after a number of successful responses, switches back to the closed position, allowing all requests to go through once more.
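The following stripped-down breaker conveys the idea; it is a sketch built on simplified assumptions (a fixed failure threshold, a single cool-down period, and closing again after one successful probe rather than several):

```go
package breaker

import (
	"errors"
	"sync"
	"time"
)

// ErrOpen is returned while the breaker is rejecting calls.
var ErrOpen = errors.New("circuit breaker is open")

// Breaker trips after maxFailures consecutive errors and rejects calls
// until cooldown has elapsed, after which it lets a probe request through.
type Breaker struct {
	mu          sync.Mutex
	failures    int
	maxFailures int
	cooldown    time.Duration
	openedAt    time.Time
}

func New(maxFailures int, cooldown time.Duration) *Breaker {
	return &Breaker{maxFailures: maxFailures, cooldown: cooldown}
}

func (b *Breaker) Call(fn func() error) error {
	b.mu.Lock()
	if b.failures >= b.maxFailures && time.Since(b.openedAt) < b.cooldown {
		b.mu.Unlock()
		return ErrOpen // tripped: fail fast without calling downstream
	}
	b.mu.Unlock()

	err := fn()

	b.mu.Lock()
	defer b.mu.Unlock()
	if err != nil {
		b.failures++
		if b.failures >= b.maxFailures {
			b.openedAt = time.Now() // trip (or re-trip after a failed probe)
		}
		return err
	}
	b.failures = 0 // a successful call closes the breaker again
	return nil
}
```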
By implementing this pattern in our microservices, we give downstream services the chance to recover from load spikes or crashes. Moreover, some services might be able to respond with cached data when downstream services are not available, thus ensuring that the system remains functional, even in the presence of problems.
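For instance, a hypothetical catalog service could keep the last price it saw for each item and serve that value whenever the live lookup fails; the types below are placeholders for illustration rather than an API from any real code base:

```go
package catalog

import (
	"context"
	"sync"
)

// PriceFetcher hides how prices are obtained from the downstream service.
type PriceFetcher interface {
	Price(ctx context.Context, sku string) (int64, error)
}

// CachedPrices serves the last known price when the live lookup fails,
// trading freshness for availability during downstream outages.
type CachedPrices struct {
	mu       sync.RWMutex
	live     PriceFetcher
	lastSeen map[string]int64
}

func NewCachedPrices(live PriceFetcher) *CachedPrices {
	return &CachedPrices{live: live, lastSeen: make(map[string]int64)}
}

func (c *CachedPrices) Price(ctx context.Context, sku string) (int64, error) {
	price, err := c.live.Price(ctx, sku)
	if err == nil {
		c.mu.Lock()
		c.lastSeen[sku] = price // refresh the cache on every success
		c.mu.Unlock()
		return price, nil
	}

	c.mu.RLock()
	cached, ok := c.lastSeen[sku]
	c.mu.RUnlock()
	if ok {
		return cached, nil // downstream unavailable: serve stale data
	}
	return 0, err // nothing cached; propagate the failure
}
```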
As we have already explained, microservice-based architectures are inherently complex, as they consist of a large number of moving parts. One of the biggest mistakes we can make is switching to this kind of architecture before laying down the necessary infrastructure for collecting the log output of each microservice and monitoring its health. Without this infrastructure in place, we are effectively flying blind. In the next section, we will explore a few different approaches to microservice instrumentation and monitoring.