From event-driven applications, it is only a small step to event-driven architectures. Event-driven programming allows you to split your application into isolated components that communicate with each other only by passing events or signals. If you have already done this, you should also be able to split your application into separate services that do the same but exchange events with each other, either through some kind of IPC mechanism or over the network.
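To make this concrete, the following is a minimal sketch of two isolated components exchanging events through an IPC queue. The event names and the handling logic are purely illustrative:

```python
# Two isolated components that communicate only by passing events
# through a multiprocessing queue (a simple IPC mechanism).
from multiprocessing import Process, Queue


def producer(events: Queue) -> None:
    # Emits events; knows nothing about who consumes them.
    for order_id in range(3):
        events.put({"type": "order_created", "order_id": order_id})
    events.put(None)  # sentinel: no more events


def consumer(events: Queue) -> None:
    # Reacts to events; knows nothing about who produced them.
    while (event := events.get()) is not None:
        print(f"handling {event['type']} for order {event['order_id']}")


if __name__ == "__main__":
    queue = Queue()
    Process(target=consumer, args=(queue,)).start()
    producer(queue)
```

In a distributed setup, the in-process queue would simply be replaced by a network-based message broker, while the producer and consumer code would stay conceptually the same.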
Event-driven architectures transfer the concept of event-driven programming to the level of inter-service communication. There are many good reasons for considering such architectures:
- Scalability and utilization of resources: If your workload can be split into many order-independent events, event-driven architectures allow the work to be easily distributed across many computing nodes (hosts). The amount of computing power can also be dynamically adjusted to match the number of events currently being processed in the system.
- Loose coupling: Systems that are composed of many (preferably small) services communicating over queues tend to be more loosely coupled than monolithic software. Loose coupling allows for easier incremental changes and the steady evolution of system architecture.
- Failure resiliency: Event-driven systems with proper event transport technology (distributed message queues with built-in message persistence) tend to be more resilient to transient issues. Modern message queues, such as Kafka or RabbitMQ, offer multiple ways to guarantee that a message is delivered to at least one recipient and can redeliver it if unexpected errors occur (a minimal consumer sketch follows this list).
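As a rough illustration of the resiliency point, here is a minimal sketch of a RabbitMQ consumer using the pika library. It assumes a broker running on localhost and uses an illustrative "events" queue name; the handler logic is a placeholder:

```python
# A minimal sketch of a resilient RabbitMQ consumer using pika.
import pika


def handle(channel, method, properties, body):
    print(f"processing event: {body!r}")
    # Acknowledge only after successful processing; unacknowledged
    # messages are redelivered if the consumer crashes mid-work.
    channel.basic_ack(delivery_tag=method.delivery_tag)


connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# durable=True lets the queue survive broker restarts.
channel.queue_declare(queue="events", durable=True)
channel.basic_consume(queue="events", on_message_callback=handle)
channel.start_consuming()
```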
Event-driven architectures work best for problems that can be dealt with asynchronously, such as file processing or file/email delivery, or for systems that deal with regular and/or scheduled events (for example, cron jobs). In Python, such architectures can also be used as a way of overcoming the CPython interpreter's performance limitations (such as the GIL, which was discussed in Chapter 15, Concurrency).
Last, but not least, event-driven architectures seem to have a natural affinity with serverless computing. In this cloud-computing execution model, you're not concerned with infrastructure and don't have to purchase computing capacity units. You leave all of the scaling and infrastructure management to your cloud service provider and supply only the code to run. Often, the pricing for such services is based solely on the resources used by your code. The most prominent category of serverless computing services is Function as a Service (FaaS), which executes small units of code (functions) in response to events.
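As an illustration, a FaaS function often boils down to a single handler invoked once per event. The following sketch uses an AWS Lambda-style handler signature; the event payload shown is hypothetical:

```python
# A minimal sketch of an AWS Lambda-style handler invoked per event.
def handler(event, context):
    # "user_email" is an assumed field of the incoming event payload.
    user_email = event["user_email"]
    print(f"sending welcome email to {user_email}")
    return {"status": "sent"}
```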
In the next section, we will discuss event and message queues.