Event-driven architectures

From event-driven applications, it is only a small step to event-driven architectures. Event-driven programming allows you to split your application into isolated components that communicate with each other only by passing events or signals. If you have already done this, you should also be able to split your application into separate services that do the same but transfer events to each other, either through some kind of IPC mechanism or over the network.
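To make the idea concrete, here is a minimal sketch (not tied to any particular framework) of two isolated components that exchange nothing but event objects through a shared queue; the OrderPlaced event and the producer/consumer names are hypothetical:

```python
import queue
import threading
from dataclasses import dataclass


@dataclass
class OrderPlaced:
    order_id: int


def producer(events):
    # Emits events without knowing anything about their consumers.
    for order_id in range(3):
        events.put(OrderPlaced(order_id))
    events.put(None)  # sentinel value marking the end of the stream


def consumer(events):
    # Reacts to events without knowing anything about their producer.
    while (event := events.get()) is not None:
        print(f"handling {event}")


if __name__ == "__main__":
    events = queue.Queue()
    threading.Thread(target=producer, args=(events,)).start()
    consumer(events)
```

The two components share nothing but the queue and the event type, so either side could later be moved into a separate service that receives the same events over IPC or the network.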

Event-driven architectures transfer the concept of event-driven programming to the level of inter-service communication. There are many good reasons for considering such architectures.

Event-driven architectures work best for problems that can be dealt with asynchronously, such as file processing or file/email delivery, or for systems that deal with regular and/or scheduled events (for example, cron jobs). In Python, such an architecture can also be used as a way of overcoming the performance limitations of the CPython interpreter (such as the GIL, which was discussed in Chapter 15, Concurrency), as sketched below.
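As a rough illustration of that last point, the following sketch (the process_file handler and the file names are hypothetical) pushes events to worker processes through a multiprocessing queue, so CPU-bound handlers run in separate interpreter processes, each with its own GIL:

```python
import multiprocessing as mp


def process_file(path):
    # Hypothetical CPU-bound handler reacting to a "file ready" event.
    return f"processed {path}"


def worker(events, results):
    # Each worker process consumes events until it receives the sentinel.
    while (event := events.get()) is not None:
        results.put(process_file(event))


if __name__ == "__main__":
    events, results = mp.Queue(), mp.Queue()
    workers = [
        mp.Process(target=worker, args=(events, results)) for _ in range(2)
    ]
    for proc in workers:
        proc.start()
    paths = ["a.csv", "b.csv", "c.csv"]
    for path in paths:
        events.put(path)
    for _ in workers:
        events.put(None)  # one sentinel per worker
    for _ in paths:
        print(results.get())
    for proc in workers:
        proc.join()
```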

Last but not least, event-driven architectures seem to have a natural affinity with serverless computing. In this cloud computing execution model, you're not concerned with infrastructure and don't have to purchase units of computing capacity. You leave all of the scaling and infrastructure management to your cloud service operator and provide them only with your code to run. Often, the pricing for such services is based solely on the resources that your code actually uses. The most prominent category of serverless computing services is Function as a Service (FaaS), which executes small units of code (functions) in response to events.
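To give a flavor of FaaS, here is a minimal sketch of a handler written in the style of AWS Lambda's Python runtime; the event shape assumes an S3-style object-created notification and is only an illustrative assumption:

```python
import json


def handler(event, context):
    # The FaaS platform invokes this function once for every incoming event.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"new object uploaded: s3://{bucket}/{key}")
    # The return value is reported back to the platform that triggered the call.
    return {"statusCode": 200, "body": json.dumps({"processed": True})}
```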

In the next section, we will discuss event and message queues.