© Binildas Christudas 2019
Binildas Christudas, Practical Microservices Architectural Patterns, https://doi.org/10.1007/978-1-4842-4501-9_13

13. Distributed Transactions

Binildas Christudas
Trivandrum, Kerala, India

You looked at using the Axon framework to implement different scenarios of the CQRS pattern in the previous chapter. In this chapter, you will be revisiting Axon for more scenarios in the microservices context, including executing long-running computational processing. To do that effectively, let’s start the discussion with a few basics, including transactions. A transaction in its simple form is a change or exchange of values between entities. The parties involved can be a single entity or more than one entity. When the party involved is a single entity, typically the transaction is a local transaction. If more than one party is involved, the transaction can be classified as either a local transaction or a distributed transaction, depending on the nature and location of the entities involved.

A transaction may involve a single step or multiple steps. Whether a single step or multiple steps, a transaction is a single unit of work that embodies these individual steps. For the successful completion of a transaction, each of the involved individual steps should succeed. If any of the individual steps fail, the transaction should undo any of the effects of the other steps it has already done within that transaction.

With this preface, you will look into the details of transactions, especially distributed transactions. I will only touch on the essentials of transactions; I’ll spend more time on the practical implications, especially in the context of microservices. Covering transactions in any greater depth would require a complete book of its own, so this coverage will be limited to discussions around microservices and a few practical tools you can use to build reliable microservices.

In this chapter you will learn about
  • The indeterministic characteristics of computer networks

  • Transaction aspects in general

  • Distributed transactions vs. local transactions

  • ACID vs. BASE transactions

  • A complete example demonstrating distributed transactions using multiple resource managers

The Two Generals Paradox

In computer networking (particularly with regard to the Transmission Control Protocol), the literature shows that TCP can’t guarantee (complete) state consistency between endpoints. This is illustrated by the Two Generals Paradox (also known as the Two Generals Problem, the Two Armies Problem, or the Coordinated Attack Problem), a thought experiment illustrating the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link.

The Two Generals Paradox is unsolvable in the face of arbitrary communication failures; however, the same problem provides a basis for realistic expectations of any distributed consistency protocol. To understand the problem, you need to understand the indeterminism involved, which provides a perfect analogy for learning more about transactions.

Illustrating the Two Generals Paradox

This section illustrates the Two Generals Paradox with the classic example scenario. Figure 13-1 shows two armies, each led by a different general, jointly preparing to attack a fortified city, shown in the center. Both armies are encamped near the city, each in its own valley. A third valley separates the two hills occupied by the two generals, and the only way for the generals to communicate is by sending messengers across that third valley. The third valley is occupied by the city’s defenders, so any messenger sent through it may be captured.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig1_HTML.jpg
Figure 13-1

Illustrating the Two Generals Paradox

The two generals have agreed that they will attack; however, they have yet to agree upon a time. The attack will only succeed if both armies attack the city at the same time; a lone attacking army will die trying. So the generals must communicate to decide on a time to attack and agree to attack jointly at that time; each general must also know that the other general knows they have jointly agreed to the plan. An acknowledgement of message receipt can be lost as easily as the original message itself, so a potentially infinite series of messages is required to reach (near) consensus. Both generals will always be left wondering whether their last messenger got through.

Solution Approaches

Computer networks are the backbone of distributed systems; however, the reliability of any network has to be questioned. Distributed systems are required to address the scalability and flexibility of software systems, so the network is a necessary evil. Schemes should accept the uncertainty of networks rather than attempt to eliminate it; they should mitigate it to an acceptable degree.

In the Two Generals Paradox, you can think of many acceptable mechanisms, such as
  • The first general can send 50 messengers, anticipating that the probability of all 50 being captured is low. This general fixes the time of attack far enough ahead for all, many, or at least a few of those 50 messengers to reach the other general. He will then attack at the communicated time no matter what, and the second general will also attack if any of the messages is received.

  • A second approach is where the first general sends a stream of messages and the second general sends acknowledgments to each received message, with each general’s comfort increasing with the number of messages received.

  • In a third approach, the first general can put a marking on each message saying it is message 1 of n, 2 of n, 3 of n, etc. Here the second general can estimate how reliable the channel is and send an appropriate number of messages back to ensure a high probability of at least one message being received. If the network is highly reliable, then one message will suffice and additional messages won’t help; if not, the last message is as likely to get lost as the first one.
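The first approach above can be quantified with a quick sketch: if each messenger is independently captured with probability p, the chance that at least one of n messengers gets through is 1 - p^n. The class and numbers below are illustrative only.

```java
// Sketch: probability that at least one of n independent messengers arrives,
// given each is captured with probability captureProb.
public class MessengerOdds {
    public static double atLeastOneArrives(double captureProb, int messengers) {
        return 1.0 - Math.pow(captureProb, messengers);
    }

    public static void main(String[] args) {
        // Even a 50% capture rate is overcome quickly by redundancy;
        // with 50 messengers the probability of delivery is essentially 1.
        System.out.println(atLeastOneArrives(0.5, 3)); // → 0.875
    }
}
```

Note that no finite number of messengers reaches certainty, which is exactly why the paradox is unsolvable in the strict sense.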

Microservices as Generals

When you have split a monolith from the state of “living happily ever after, together” to the state of “living happily ever after, separated,” you have split an army under one general into armies under multiple generals. Each microservice has its own general, or is a general by itself, and more than one general may be required to coordinate any useful functionality. By adopting polyglot persistence, each microservice has its own resource, whether a database or file storage. The network is the only way these microservices can coordinate and communicate. The effects of changes made to resources across microservices have to be made consistent. This consistency depends on which scheme or protocol you choose to coordinate and communicate the commands that make changes and the acknowledgements, which act as the agreement to effect that change between microservices.

TCP/IP, the Valley Between the Generals

The Internet Protocol (IP) is not reliable: packets may be delayed, dropped, or duplicated, or may arrive in an order other than the one intended. The Transmission Control Protocol (TCP) adds a more reliable layer over IP. TCP can retransmit missing packets, eliminate duplicates, and reassemble packets in the order in which the sender transmitted them. However, while TCP can hide packet loss, duplication, and reordering, it cannot remove delays in the network. Traditional fixed-line telephone networks are extremely reliable; delayed audio frames and dropped calls are very rare. This is because a fixed, guaranteed amount of bandwidth is allocated for your call along the entire route between the two callers; this is referred to as circuit switched. However, data center networks and the Internet are not circuit switched. Their backbones, Ethernet and IP, are packet switched protocols, which suffer from queueing and can therefore impose unbounded delays on the network.

Each microservice has its own memory space for computation and resources for persistence and storage, and one microservice cannot access another microservice’s memory or resource (except by making requests to the other microservice over the network, through an exposed API). The Internet and most internal networks in data centers (often Ethernet) have the characteristics explained above, and these networks become the only way microservices can coordinate and communicate. In this kind of network, one microservice can send a message to another microservice, but the network gives no guarantees as to when it will arrive or whether it will arrive at all. If you send a request and expect a response, many things could go wrong in between.

The sending microservice can’t even tell whether the request or message was delivered. The next best option is for the recipient microservice to send a response message, which may in turn be lost or delayed. These issues are indistinguishable in an asynchronous network: when one microservice sends a request to another and doesn’t receive a response, it is impossible to tell why. The typical way of handling such issues is to use timeouts: after some time you give up waiting and assume that the response is never going to arrive. However, when a timeout occurs, you still don’t know whether the remote microservice got your request. If the request is still queued somewhere, it may yet be delivered to the recipient, even though the sender has given up and may have resent the request, in which case a duplicate request gets fired. Advanced solutions to this problem are addressed in the reference.1
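One common mitigation for the duplicate-request problem is to make the receiver idempotent: the client retries after a timeout but reuses the same request ID, and the server applies each ID at most once. The sketch below is illustrative; the class and method names are my own, not from any particular library.

```java
import java.util.*;

// Sketch of an idempotent receiver (hypothetical names): retries after a
// timeout reuse the same request ID, so duplicates are detected and ignored.
public class IdempotentRetry {
    private final Set<String> processed = new HashSet<>();
    private int applied = 0;

    // Server side: apply the change only once per request ID.
    public boolean handle(String requestId) {
        if (!processed.add(requestId)) return false; // duplicate: ignore it
        applied++;                                   // the real state change
        return true;
    }

    public int appliedCount() { return applied; }

    public static void main(String[] args) {
        IdempotentRetry server = new IdempotentRetry();
        String requestId = UUID.randomUUID().toString();
        server.handle(requestId); // original request (client saw a timeout)
        server.handle(requestId); // client's retry arrives as a duplicate
        System.out.println(server.appliedCount()); // → 1
    }
}
```

The timeout itself tells the client nothing; idempotency makes the retry safe regardless of whether the original request was lost, delayed, or already processed.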

Transactions

Transactions help you control how access is given to the same data set, whether for read or write purposes. Data in a changing state might be inconsistent, hence other reads and writes have to wait while the state transitions from “changing” to either “committed” or “rolled back.” You will look at the details related to transactions now.

Hardware Instruction Sets at the Core of Transactions

For transactions at a single database node, atomicity is implemented at the storage level. When a transaction commits, the database makes the transaction’s writes durable (typically in a write-ahead log2) and subsequently appends a commit record to the log on disk. So, the controller of the disk drive handling that write plays a crucial role in asserting that the write has happened.

A transaction once committed cannot be undone. You cannot change your mind and retroactively undo a transaction after it has been committed. This is because, once data has been committed, it may become visible to other transactions and thus other transactions may have already started relying on that data.

The ACID in the Transaction

For a transaction to comply with the specification, it should exhibit the ACID (Atomicity, Consistency, Isolation, and Durability) properties. Let’s look into them one by one.
  • Atomicity : The outcome of a transaction, if it’s a commit, is that all of the transaction’s writes are made durable as a single unit of work. If it’s an abort, all of the transaction’s writes are rolled back (i.e., undone or discarded) as a single unit of work. Recall from the previous section that it’s the single controller of one particular disk drive that sits at the core and takes the commit or rollback decision; whether it’s the write of a single word3 or multiple words (depending on 32-bit or 64-bit architecture), it’s support available at the hardware level that helps you club them together as the write of a single unit of work.

  • Consistency : This property guarantees that a transaction will leave the data in a consistent state, irrespective of whether the transaction is committed or rolled back. Here, “consistent state” refers to adherence to the constraints or rules of the database. It is common to model business rule constraints in terms of database integrity constraints. So, maintaining consistency is a dual effort by both the resource manager and the application: the application ensures consistency in terms of keys, referential integrity, and so on, while the transaction manager ensures that the transaction is atomic, isolated, and durable. For example, if your transaction is to assign the last seat on a flight to one of two travelers who are concurrently booking that same flight, the transaction will be consistent if one seat is removed from the inventory of free seats and assigned to one and only one of the travelers, while at the same time showing that the same seat is no longer assigned or assignable to the other traveler trying to book concurrently.

  • Isolation : The isolation property guarantees that the changes ongoing in a transaction will not affect the data accessed by another transaction. This isolation can be controlled at finer levels, and the “Transaction Isolation Mechanisms” section describes them.

  • Durability : This property requires that when a transaction is committed, any changes the transaction makes to the data must be recorded permanently. When a transaction commits, the database makes the transaction’s writes durable (typically in a write-ahead log) and subsequently appends a commit record to the log on disk. If the database crashes in the middle of this process, the transaction can be recovered from the log when the database restarts. If the commit record was already written to disk before the crash, the transaction is considered committed; if not, any writes from that transaction are rolled back. So the moment at which the disk finishes writing the commit record is the single deciding point: before that moment, it is still possible to abort (due to a crash), but after that moment, the transaction is committed (even if the database crashes just afterwards). Thus, it is the controller of the disk drive handling the write that makes the commit atomic. If an error or crash happens during the process, these logs can be used to rebuild or recreate the data changes.
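The atomicity property above can be sketched with a toy resource manager: writes are buffered in a pending set and either become visible all at once on commit or are discarded all at once on rollback. This is purely illustrative (no real database buffers writes this simply), with names of my own choosing.

```java
import java.util.*;

// Toy sketch of atomicity: buffered writes become visible atomically on
// commit, or are discarded as a single unit on rollback.
public class ToyResource {
    private final Map<String, String> committed = new HashMap<>();
    private final Map<String, String> pending = new LinkedHashMap<>();

    public void write(String key, String value) { pending.put(key, value); }

    public void commit() { committed.putAll(pending); pending.clear(); }

    public void rollback() { pending.clear(); } // undo all buffered writes as one unit

    public String read(String key) { return committed.get(key); }

    public static void main(String[] args) {
        ToyResource db = new ToyResource();
        db.write("a", "1");
        db.write("b", "2");
        db.rollback();                    // neither write survives
        System.out.println(db.read("a")); // → null
        db.write("a", "1");
        db.commit();                      // now both reads and crashes see it
        System.out.println(db.read("a")); // → 1
    }
}
```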

Transaction Models

Transaction models refer to how the coordination of individual transactions within the context of an enclosing transaction is structured. You will look at the major transaction models here:
  • Flat transactions: In a flat transaction, when one of the steps fails, the entire transaction is rolled back. The transaction completes each of its steps before going on to the next one, accessing the corresponding resources sequentially. When the transaction uses locks, it can only wait for one object at a time.

  • Nested transactions: In a nested transaction, atomic transactions are embedded in other transactions. The top-level transaction can open subtransactions, each subtransaction can open further subtransactions, and this nesting can continue. The outcome of an individual subtransaction does not by itself affect the parent transaction. When a parent aborts, all of its subtransactions are aborted. However, when a subtransaction aborts, the parent can decide whether or not to abort. In a nested transaction, a subtransaction can be another nested transaction or a flat transaction.

  • Chained transactions: In a chained transaction, each transaction relies on the result and resources of the previous transaction. A chained transaction is also called a serial transaction. The distinguishing feature is that when a transaction commits, its resources, like cursors, are retained and remain available for the next transaction in the chain. So, transactions inside the chain can see the results of previous commits within the chain; however, transactions outside of the chain cannot see or alter the data being affected by transactions within the chain. If one of the transactions fails, only that one is rolled back; the previously committed transactions are not.

  • Saga: Sagas are similar to nested transactions; however, each of the transactions has a corresponding compensating transaction. If any of the transactions in a saga fails, the compensating transactions for each transaction that previously ran successfully are invoked.
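The saga model can be sketched in a few lines: each step pairs an action with a compensating action, and on failure the already-completed steps are compensated in reverse order. This is an illustrative sketch with invented names, not a production saga orchestrator.

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch of the saga model: each step carries a compensating action; on
// failure, previously completed steps are compensated in reverse order.
public class Saga {
    record Step(String name, Supplier<Boolean> action, Runnable compensation) {}

    private final List<Step> steps = new ArrayList<>();

    public Saga step(String name, Supplier<Boolean> action, Runnable compensation) {
        steps.add(new Step(name, action, compensation));
        return this;
    }

    public boolean run() {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            if (!s.action().get()) {
                // Compensate everything that already succeeded, most recent first.
                while (!done.isEmpty()) done.pop().compensation().run();
                return false;
            }
            done.push(s);
        }
        return true;
    }

    public static void main(String[] args) {
        List<String> trace = new ArrayList<>();
        boolean ok = new Saga()
            .step("reserve-seat", () -> { trace.add("reserve"); return true; },
                  () -> trace.add("release"))
            .step("charge-card", () -> { trace.add("charge"); return false; }, // fails
                  () -> trace.add("refund"))
            .run();
        System.out.println(ok + " " + trace); // → false [reserve, charge, release]
    }
}
```

Note that the failed step itself is not compensated, only the steps that had already succeeded; this mirrors how saga frameworks typically behave.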

Transaction Attributes in EJB vs. Spring

Both Spring and EJB give the user the freedom to choose between programmatic and declarative transaction management. For programmatic transaction management, you code against the JDBC and JTA APIs. With the declarative approach, you externalize transaction control to configuration files. You also need to choose from the available transaction attributes to get the required behavior. The EJB specification defines six basic transaction attributes, and Spring has counterparts for all six. In fact, Spring has more:
  • PROPAGATION_REQUIRED (REQUIRED in EJB): Supports a current transaction; creates a new one if none exists.

  • PROPAGATION_REQUIRES_NEW (REQUIRES_NEW in EJB): Creates a new transaction; suspends the current transaction if one exists.

  • PROPAGATION_NOT_SUPPORTED (NOT_SUPPORTED in EJB): Executes non-transactionally; suspends the current transaction if one exists.

  • PROPAGATION_SUPPORTS (SUPPORTS in EJB): Supports a current transaction; executes non-transactionally if none exists.

  • PROPAGATION_MANDATORY (MANDATORY in EJB): Supports a current transaction; throws an exception if none exists.

  • PROPAGATION_NEVER (NEVER in EJB): Executes non-transactionally; throws an exception if a transaction exists.

  • PROPAGATION_NESTED (no equivalent in EJB): Executes within a nested transaction if a current transaction exists; otherwise behaves like PROPAGATION_REQUIRED. There is no analogous feature in EJB. Actual creation of a nested transaction only works on specific transaction managers. Out of the box, this applies only to the JDBC DataSourceTransactionManager when working with a JDBC 3.0 driver. Some JTA providers might support nested transactions as well.
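The difference between REQUIRED and REQUIRES_NEW can be illustrated with a simple stack of transaction contexts. This is NOT how Spring implements propagation internally; it is only a sketch of the semantics, with invented names.

```java
import java.util.*;

// Illustrative sketch (not Spring's implementation): REQUIRED joins the
// current transaction or creates one; REQUIRES_NEW always starts a fresh one,
// suspending whatever was current.
public class Propagation {
    private final Deque<String> txStack = new ArrayDeque<>();
    private int txCounter = 0;

    public String required() {
        if (txStack.isEmpty()) txStack.push("tx" + (++txCounter)); // create if none exists
        return txStack.peek();                                     // otherwise join it
    }

    public String requiresNew() {
        txStack.push("tx" + (++txCounter)); // suspend the current one, start fresh
        return txStack.peek();
    }

    public static void main(String[] args) {
        Propagation p = new Propagation();
        String outer = p.required();    // no transaction yet: creates tx1
        String joined = p.required();   // REQUIRED joins the existing tx1
        String inner = p.requiresNew(); // REQUIRES_NEW suspends tx1, starts tx2
        System.out.println(outer + " " + joined + " " + inner); // → tx1 tx1 tx2
    }
}
```

In Spring you would express the same intent declaratively, for example with @Transactional(propagation = Propagation.REQUIRES_NEW) on the inner method.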

Transaction Isolation Mechanisms

Transaction isolation refers to the protection of one transaction from the effects of others when multiple concurrent transactions operate, often on the same data set. Transaction managers rely on two mechanisms to achieve transaction isolation:
  • Locking: Locking controls a transaction’s access to a particular data set. Read locks are non-exclusive and allow multiple transactions to read data concurrently. Write locks, however, are exclusive: only a single transaction is allowed to update the data.

  • Serialization: When multiple transactions happen concurrently, serialization is a mechanism that guarantees the combined effect is as if they had executed sequentially rather than concurrently. Locking is one means of enforcing serialization.

Transaction Isolation Levels

The types of serialization and locks used, as well as the extent to which they are applied, determine the level of isolation under which a transaction will execute. Java Enterprise Edition specifies the following isolation levels, as defined in the java.sql.Connection interface:
  • TRANSACTION_NONE: Indicates that transactions are not supported.

  • TRANSACTION_READ_COMMITTED: Indicates that dirty reads are prevented; non-repeatable reads and phantom4 reads can occur.

  • TRANSACTION_READ_UNCOMMITTED: Indicates that dirty reads, non-repeatable reads, and phantom reads can occur. This level allows a row changed by one transaction to be read by another transaction before any changes in that row have been committed (a dirty read). If any of the changes are rolled back, the second transaction will have retrieved an invalid row.

  • TRANSACTION_REPEATABLE_READ: Indicates that dirty reads and non-repeatable reads are prevented; phantom reads can occur.

  • TRANSACTION_SERIALIZABLE: Indicates that dirty reads, non-repeatable reads, and phantom reads are prevented. This level prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional phantom row in the second read.
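These levels map to integer constants on java.sql.Connection, which you would pass to Connection.setTransactionIsolation(...) on a live connection. The small program below only prints the constant values; no database connection is involved.

```java
import java.sql.Connection;

// The java.sql.Connection constants behind the isolation levels listed above.
// On a live connection you would call conn.setTransactionIsolation(level).
public class IsolationConstants {
    public static void main(String[] args) {
        System.out.println(Connection.TRANSACTION_NONE);             // → 0
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // → 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // → 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // → 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // → 8
    }
}
```

The values are bit flags rather than a simple sequence, which is why REPEATABLE_READ is 4 and SERIALIZABLE is 8.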

The level of relaxation or strictness in enforcing serialization and locks is determined by the type of concurrency anomaly the application is willing to tolerate with respect to staleness of data. The next section explains this concept.

Transaction Concurrency

When multiple transactions happen concurrently, often on the same data set, the degree to which one transaction is protected from the effects of the others can be controlled via the level of concurrency anomaly the application is willing to tolerate. The anomalies are the following:
  • Dirty read: Dirty reads happen when a transaction reads data that has been written by another transaction but has not yet been committed by the other transaction.

  • Non-repeatable read: If a transaction reads data and gets a different result when it rereads the same data within the same transaction, that is a non-repeatable read.

  • Phantom read: When a single transaction reads the same data set more than once, a phantom read happens if another transaction slips in and inserts additional rows between the reads.

Transaction Isolation Control Methods

Strict transaction isolation levels can hurt performance, and in such cases there are two general approaches to locking:
  • Optimistic locking: Optimistic locking takes a pragmatic approach: it encourages clients to be “optimistic” that data will not change while they are using it and allows them to access the same data concurrently. If the transaction on behalf of any particular client wants to update, the update is committed if and only if the data originally provided to the client is still the same as the current data in the database. This approach is less suited to hot-spot data because the comparison will fail often.

  • Pessimistic locking: Pessimistic locking may use a semaphore5 or any of the transaction mechanisms previously discussed. For every read, you need a read lock, and for every write, you need a write lock. Read locks are not exclusive, whereas write locks are. A typical lock may be obtained through a query similar to

    SELECT * FROM QUOTES_TABLE WHERE QUOTES_ID=5 FOR UPDATE;

Both locks are released only when the use of the data is completed. A read lock can be granted to a transaction on request, provided no other transaction holds the write lock. For a transaction to update, it requires a write lock, and while that transaction holds the write lock, other transactions can read the data only if they do not themselves require a read lock. Moreover, if a transaction holds a write lock, it will not allow another transaction to read its changing data until those changes have been committed.
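Optimistic locking is commonly implemented with a version column: the update succeeds only if the version the client originally read still matches the stored version (the SQL analogue is `UPDATE ... SET data=?, version=version+1 WHERE id=? AND version=?`). The in-memory sketch below is illustrative, with invented names.

```java
// Sketch of optimistic locking with a version counter: an update commits only
// if the version the client read is still the current one (compare-and-swap).
public class OptimisticStore {
    private String data = "initial";
    private long version = 0;

    public synchronized long currentVersion() { return version; }

    public synchronized boolean update(String newData, long expectedVersion) {
        if (version != expectedVersion) return false; // someone else committed first
        data = newData;
        version++;
        return true;
    }

    public static void main(String[] args) {
        OptimisticStore store = new OptimisticStore();
        long v = store.currentVersion();          // both clients read version 0
        boolean first = store.update("A", v);     // first writer wins
        boolean second = store.update("B", v);    // stale version: rejected
        System.out.println(first + " " + second); // → true false
    }
}
```

The losing client then rereads the current state and retries, which is cheap for rarely-contended data and expensive for hot-spot data, exactly as described above.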

Enterprise Transaction Categories

Transactions have different connotations depending on the context in which we use them. It can be in the context of multiple operations within a monolith system, or in the context of multiple monoliths or multiple microservices, or in the context of multiple enterprises with common (or conflicting) interests. You will briefly look at the different transaction categories in this context.

ACID Transactions

ACID transactions are atomic transactions in which multiple steps or operations are executed with the intent that either all of them succeed together or all of them return to the previous state due to the failure of one or more of the steps. A major portion of the discussion in the previous sections of this chapter concerned ACID transactions, so enough has been said about them already. You will now look at the architecture of an ACID transaction processing (TP) system. Figure 13-2 depicts a typical architecture consisting of multiple resource managers.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig2_HTML.jpg
Figure 13-2

Distributed ACID transactions with multiple resources

BASE = ACID in Slices

ACID is the only style that can guarantee data consistency. However, if a distributed system uses ACID transactions, everything has to happen in lockstep, everywhere. This requires all components to be available, and it also increases the lock times on the data. BASE is an architectural style that relaxes these requirements, simply by cutting the (overall, ideal) ACID transaction into smaller pieces, each of which is still ACID in itself.

Messaging is used to make changes ripple through the system, where each change is ideally processed by an ACID transaction. The messages are asynchronous rather than synchronous, meaning that there is a delay between the first and last ACID transaction in a BASE system.

This is why this style exhibits “eventual consistency”: one has to wait for all messages to ripple through the system and be applied where appropriate. The “Eventual Consistency of Microservices” section in Chapter 11 discussed this in detail.

I’ll now introduce BASE in various incremental steps, by cutting more and more into the ACID transaction scope.

BASE Transactions

Figure 13-3 shows a BASE architecture with four different microservices. Each microservice still has ACID properties (independent of the other microservices of the system).
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig3_HTML.jpg
Figure 13-3

BASE architecture where each microservice still has ACID guarantees by means of XA transactions

This is the architecture of a microservice ecosystem in BASE style that sends or receives messages in ACID style. You will look at the functionality of each microservice in a later section (the “Design the Example Scenario” subsection under the “Distributed Transaction Example” section); for now, pay attention to the technical components depicted here. It is BASE because, via ActiveMQ, you don’t know or care where the messages go to or come from; all that quote processing and quote settlement see between them is the queues (whereas a classical ACID architecture would create one big ACID transaction spanning quote settlement and quote processing). You use JTA/XA to ensure that messages are processed exactly once, since you don’t want to lose messages (or get duplicates). Messages received from the queue are guaranteed to make it to the database, even with intermediate crashes. Messages that are sent are guaranteed to have their domain object status updated in the database, even with intermediate crashes.
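The exactly-once guarantee described above rests on one idea: the queue receive (ack) and the database write commit or roll back as a single unit, so a crash in between causes redelivery rather than loss. The toy sketch below simulates that unit of work in memory; names and the crash flag are illustrative, not a JTA/XA API.

```java
import java.util.*;

// Toy sketch of XA-style exactly-once consumption: the message is acked only
// when the DB write commits; a crash before commit discards both effects,
// and the message is simply redelivered.
public class XaLikeConsumer {
    private final Deque<String> queue = new ArrayDeque<>();
    private final List<String> database = new ArrayList<>();

    public void send(String msg) { queue.add(msg); }

    public int databaseSize() { return database.size(); }
    public int queueSize() { return queue.size(); }

    // crashBeforeCommit simulates a failure after the DB write, before commit.
    public boolean consumeOnce(boolean crashBeforeCommit) {
        String msg = queue.peek();           // receive, but do not ack yet
        if (msg == null) return false;
        List<String> dbTx = new ArrayList<>(database);
        dbTx.add(msg);                       // DB write inside the transaction
        if (crashBeforeCommit) return false; // both effects discarded together
        database.clear(); database.addAll(dbTx);
        queue.poll();                        // ack only at commit
        return true;
    }

    public static void main(String[] args) {
        XaLikeConsumer c = new XaLikeConsumer();
        c.send("quote-42");
        c.consumeOnce(true);  // crash: nothing committed, message still queued
        System.out.println(c.databaseSize() + " " + c.queueSize()); // → 0 1
        c.consumeOnce(false); // redelivery succeeds
        System.out.println(c.databaseSize() + " " + c.queueSize()); // → 1 0
    }
}
```

In the real architecture, a JTA transaction manager coordinates the JMS session and the JDBC connection via the XA two-phase commit protocol to get this same all-or-nothing behavior.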

For systems that require strong guarantees on data consistency, this is a good solution. In other systems, strong data consistency is not that vital, and you can relax things even more, by using an even finer scope of ACID transactions in your BASE architecture.

When you transition to microservices, you want to avoid system-wide distributed transactions. However, your data is distributed more widely than in the monolith scenario, so the need for distributed transactions is very real. It is in this context that we talk about eventual consistency in place of instantaneous consistency. The “Eventual Consistency of Microservices” section in Chapter 11 covered this in detail.

Relaxed BASE Transactions

In some cases, you don’t care much about message loss or duplicates, and the scope of your ACID transactions can be limited to single resource managers, that is, to purely local transactions. This offers more performance, but at a cost: whereas so far the BASE system had eventual consistency, this architecture no longer guarantees it by default.

As will be shown in the next chapter, to get some level of consistency, a lot more work has to be done by the application developers; in some cases, eventual consistency may even be impossible, notably in the case of message loss.

Figure 13-4 shows a typical architecture consisting of multiple resource managers in a relaxed BASE transaction.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig4_HTML.jpg
Figure 13-4

Distributed, relaxed BASE transactions with multiple resources

Microservice resources are distributed across nodes. You want to avoid distributed transactions, since locking resources across nodes impedes system scalability. Further, you are OK with eventual consistency, and you don’t mind if you can’t achieve it perfectly. If so, Figure 13-4 illustrates how to follow the relaxed BASE model. Applications interact either directly with one or more of the involved resource managers or, more typically, through some load balancer or gateway that distributes incoming requests to a pool of resource managers with symmetrical responsibilities and capabilities. Requests are served by one or more resource managers, and changes are subsequently propagated to one or more other resource managers. The number of resource managers involved depends on the sharding model (covered in Chapter 11’s “The Scale Cube” section) and the system configuration chosen, ranging from a single node (single replica) to a quorum of nodes to, theoretically, all replicas of a given data record.

Axon provides many features with which you can span BASE transactions across resource managers without using a distributed transaction manager or transaction coordinator. To rephrase, there is no distributed transaction manager or transaction coordinator in a BASE transaction; still, transactions can be propagated across multiple resource managers to achieve some level of eventual consistency. Figure 13-5 depicts this scenario, showing the key components in Axon that make this happen.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig5_HTML.jpg
Figure 13-5

Axon components to support BASE transactions

You will look at typical relaxed BASE transactional steps, referring to Figure 13-5. A write request from a browser arrives as an HTTP PUT at the HTTP server or the load balancer.
  1. The request is routed to the API gateway.

  2. The API gateway routes this request to the first point of contact in the Axon application, which is the (front) REST Controller.

  3. The REST Controller, after unpacking the request, interprets it as a request for an entity change and creates a command, following the CQRS pattern. This command is sent to the distributed command bus, starting a local transaction.

  4. The distributed command bus transports this command to the node containing the respective command handler (Resource Manager 3).

  5. The command handler effects the write to the entity with the help of the repository in the Write DB.

  6. The above repository change also triggers an event (the “Design the Example Scenario” section under the “Command and Event Handling in Same JVM” section in Chapter 12 provides a detailed narration), and this event is sent to the clustered event bus.

  7. In Figure 13-5, you can see that two different event handlers (Resource Manager 1 and Resource Manager 2) are interested in the above event, and they may also make whatever state changes it dictates.

  8. You can see that Resource Manager 2 is in effect synchronizing itself (the Read DB) with any interesting changes in the system.

  9. Any subsequent query (HTTP GET) also sees these changes. Eventually, everyone sees the changes.

ACID vs. BASE

Such a comparison is like comparing apples and oranges: they are not intended to replace each other, and both are equally good (or bad) at serving their own purposes. That is easily said; however, no CTO-level discussion can end with so simple an explanation. Having said that, understanding this distinction is key to “mastering microservices.” If you understand this difference, the main intention of this book is met!

BASE transactions are the foundation of microservices architectures that want to stay away from ACID transactions. However, the two serve different purposes. There is not much point in debating one over the other, since they solve orthogonal concerns; this is why the CAP theorem is significant. But then, do you want to completely put aside ACID transactions? It's like asking whether you, as the captain of your fighter jet, should have direct control of the seat ejection button, or whether you are comfortable not having direct reach to it as long as you are (almost) sure your co-pilot will activate it at your command. In the former case, you are very sure that you can escape instantaneously, at your own will, when a sudden need arises. In the latter case, you are almost sure that you will still be able to escape; however, there is some skepticism as to whether the escape will be instantaneous or delayed, either because your co-pilot takes time to execute your command or, in the unfortunate event, because your co-pilot is not in a position to move his own muscles to act! To restate, the former is analogous to an ACID transaction, which is relatively more deterministic, whereas the latter is analogous to a BASE transaction, which (may) eventually happen.

We still need ACID, if not all the time.

Let's now come to BASE. If you use ACID at the right level (as in Figure 13-3), things are easy. A more relaxed BASE approach, on the other hand, is not going to ease your life; in fact, relaxed BASE brings far more complexity than ACID.

Having said that, one way to understand and appreciate the complexities involved in BASE transactions is to understand ACID transactions themselves, so you will do that in the following sections.

Distributed Transactions Revisited

Before you look into concrete examples, you need to understand a few concepts that will set the context for the examples you will explore.

Local Transactions

If you take a typical resource manager, that single resource is usually confined to a single host or node (though this is not mandatory). Operations confined to such a single resource are local transactions, and they affect only one transactional resource. Within a single node there are fewer (for practical purposes, no) nondeterministic operations, so a command sent to a single node can be considered deterministic, and in case of any catastrophe, there are local recovery mechanisms. These resources have their own transactional APIs, and the notion of a transaction is often exposed as the concept of a session, which can encapsulate a unit of work with demarcating APIs that tell the resource when the buffered work should be committed to the underlying resource. Thus, from a developer's perspective, in a local transaction you do not manage transactions as such, just "connections."

The java.sql.Connection interface is a transactional resource that can wrap a database. By default, a Connection object is in auto-commit mode, which means that it automatically commits changes after executing each statement. If auto-commit mode is disabled, the commit method must be called explicitly to commit changes; otherwise, database changes will not be saved. When you have more than one statement, it is preferable to collect the related statements into a batch and then commit all, or none, of them. You do this by first calling the Connection's setAutoCommit() method with false and later explicitly calling Connection.commit() or Connection.rollback() at the end of the batch. See Listing 13-1.
try{
    connection.setAutoCommit(false);  // start a batch: no per-statement commit
    Statement statement = connection.createStatement();
    String insertString = "insert into " + dbName + ".SHIPPING VALUES ("
        + orderId + ", 'NEW')";
    statement.executeUpdate(insertString);
    String updateString = "update " + dbName + ".ORDERS set STATUS = "
        + "'PROCESSED' where ORDER_ID = " + orderId;
    statement.executeUpdate(updateString);
    connection.commit();              // both statements take effect together...
}catch(SQLException se){
    connection.rollback();            // ...or neither does
}
Listing 13-1

A Local Transaction Sample

In Listing 13-1, both statements are committed together or rolled back together. However, this is a local transaction, not a distributed transaction.

Distributed Transactions

Typically, for a distributed transaction to exist, it must span at least two resource managers. Databases, message queues, and transaction processing (TP) monitors like IBM's CICS, BEA's Tuxedo, SAP Java Connector, and Siebel Systems are common transactional resources, and if a transaction is to be distributed, it often has to span a couple of such resources. A distributed transaction can be seen as an atomic operation that must be synchronized (or provide ACID properties) among multiple participating resources distributed among different physical locations.

Java EE application servers like Oracle WebLogic and IBM WebSphere support JTA out of the box, and there are third-party, standalone implementations of JTA like
  • JOTM: JOTM is an open source transaction manager implemented in Java. It supports several transaction models and specifications providing transaction support for clients using a wide range of middleware platforms (J2EE, CORBA, Web Services, OSGi).

  • Narayana: Narayana, formerly known as JBossTS and Arjuna Transaction Service, comes with a very robust implementation that supports both the JTA and JTS APIs. It supports three extended transaction models: nested top-level transactions, nested transactions, and a compensation-based model based on sagas. Further, it also supports web service and RESTful transactions. There is a need for manual integration with the Spring framework, but it provides out-of-the-box integration with Spring Boot.

  • Atomikos TransactionsEssentials: Atomikos TransactionsEssentials is a production quality implementation that also supports recovery and some exotic features beyond the JTA API. It provides out-of-the-box Spring integration and support for pooled connections for both database and JMS resources.

  • Bitronix JTA: Bitronix claims to support transaction recovery as well as or even better than some of the commercial products. Bitronix also provides connection pooling and session pooling out of the box.

Distributed Transactions in Java

Coordination of transactions spanning multiple resources is specified by the X/Open standards from The Open Group. Java supports the X/Open standards by providing two interfaces: JTA (Java Transaction API) and JTS (Java Transaction Service). As shown in Figure 13-2, JTA is used by application developers to communicate with transaction managers. Since resources can be provided by multiple vendors using different platforms and programming languages, if all of these resources must be coordinated, they again have to agree on the X/Open standards. Here JTS, which follows the CORBA OTS (Object Transaction Service), provides the required interoperability between the transaction managers sitting with the distributed resources.

I have discussed the "distributed nature" of distributed transactions. The two-phase commit protocol is used to coordinate transactions across multiple resource managers.
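As a rough illustration, the two phases can be modeled in a few lines of plain Java. This is a minimal sketch, not a real XA implementation; all names here are invented for the example. The coordinator asks every participant to vote in the prepare phase, and only if all vote "yes" does it issue commit; otherwise it issues rollback to everyone.

```java
import java.util.List;

public class TwoPhaseCommitDemo {
    interface Participant {
        boolean prepare();   // phase 1: vote yes/no
        void commit();       // phase 2a
        void rollback();     // phase 2b
    }

    static class Resource implements Participant {
        final boolean canPrepare;
        String state = "PENDING";
        Resource(boolean canPrepare) { this.canPrepare = canPrepare; }
        public boolean prepare()  { return canPrepare; }
        public void commit()      { state = "COMMITTED"; }
        public void rollback()    { state = "ROLLED_BACK"; }
    }

    // Returns true if the global transaction committed. allMatch short-circuits,
    // so a "no" vote lets the coordinator abort early (presumed abort).
    static boolean coordinate(List<Participant> participants) {
        boolean allPrepared = participants.stream().allMatch(Participant::prepare);
        if (allPrepared) {
            participants.forEach(Participant::commit);
        } else {
            participants.forEach(Participant::rollback);
        }
        return allPrepared;
    }

    public static void main(String[] args) {
        Resource db = new Resource(true);
        Resource queue = new Resource(false); // simulate a failing vote
        boolean committed = coordinate(List.of(db, queue));
        System.out.println("committed=" + committed
            + " db=" + db.state + " queue=" + queue.state);
    }
}
```

Because the queue votes "no," both resources end up rolled back; atomicity across the two resources is preserved even though only one of them failed.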

Distributed Transaction Example Using MySQL, ActiveMQ, Derby, and Atomikos

The easiest way to do away with many data consistency problems, especially when multiple resource managers are involved, is to use XA or distributed transactions even in BASE, as in Figure 13-3. However, if you want to transition to even more relaxed BASE microservices, the design mechanisms have to be more fault tolerant, since the XA transaction managers that would have done all the hard work are absent. You will walk through the major consistency concern scenarios with the help of an example. I could have demonstrated these scenarios directly, without using distributed transactions in the example; however, I will take the reverse route: use XA transactions to illustrate the perfect scenario, then simulate the various fault conditions by cutting down the ACID scope, since it is easier for you, the reader, to comprehend matters this way. XA transactions allow us to do this by "abusing" the transaction attributes to make the transaction scope smaller than it would normally be. This means I can reuse the same code to illustrate the anomalies of relaxed BASE incrementally.

The Example Scenario

The example is not trivial, as shown in Figure 13-3, so the scenario requires a little explanation. The example scenario is a simple stock trade processing system. Here are the building blocks:
  1. New quotes for stock transactions can be pushed to the Broker Web microservice. New quotes get inserted into a Quotes table in a MySQL DB, with a status of "New."

  2. The Quotes processor task is a Quartz scheduled task; it polls the Quotes table in the MySQL DB for any new quotes with a status of "New."

  3. When a quote with a status of "New" is found, it invokes the processNewQuote method of the broker service, always within a transaction, passing the unique identifier of the new quote in the Quotes table.

  4. The broker service makes use of two other transactional services, the auction service and the stock order service, and the execution of both has to be atomic.

  5. The auction service confirms the quote received by changing the status of the quote to "Confirmed" within a transaction.

  6. The stock order service creates a JMS message out of the information contained in the new quote and sends it to an ActiveMQ queue for settlement of the quote, again within the transaction of step 5.

  7. The settlement listener service listens on the ActiveMQ queue for any new confirmed quote. All confirmed quotes reaching the ActiveMQ queue are picked up by the settlement listener service's onMessage, within a transaction.

  8. The settlement listener service invokes the quotes reconcile service for reconciliation of the quote, again within the transaction of step 7.

  9. The quotes reconcile service reconciles the value of the stock traded against the respective user accounts of the seller and the buyer, within the transaction of step 7.

  10. The Broker Web microservice and the Settlement Web microservice are just utilities that provide dashboard views of the operational data in both the Quotes table and the User table.
Having seen the high-level business of the example scenario, let’s now pay attention to other infrastructural aspects in the architecture. The Quote Processing microservice has to do atomic operations across two resources, MySQL and ActiveMQ. You need a distributed transaction manager here. Similarly, the Quote Settlement microservice must also do atomic operations across two resources, Derby and ActiveMQ. You need a distributed transaction manager here also.

Code the Example Scenario

The complete code required to demonstrate the distributed transaction example is in folder ch13\ch13-01. There are four microservices to be coded. You will look into them one by one.

Microservice 1: Quote Processing (Broker-MySQL-ActiveMQ)

Visit pom.xml to see the Atomikos distributed transaction manager dependency. See Listing 13-2.
<dependencies>
    <dependency>
        <groupId>javax.transaction</groupId>
        <artifactId>jta</artifactId>
        <version>1.1</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions-hibernate3</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions-api</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions-jms</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions-jdbc</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>com.atomikos</groupId>
        <artifactId>transactions-jta</artifactId>
        <version>3.9.3</version>
    </dependency>
    <dependency>
        <groupId>org.apache.activemq</groupId>
        <artifactId>activemq-core</artifactId>
        <version>5.7.0</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.14</version>
    </dependency>
</dependencies>
Listing 13-2

Maven Dependencies for the Distributed Transactions Example Using MySQL and ActiveMQ (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\pom.xml)

These library dependencies will be clearer when you look at the configuration in detail. Before you do that, let’s inspect the main components in the architecture.

You will first look at the Quotes Processor Task, a Quartz-scheduled job that fires at configured intervals, so processNewQuotes() gets triggered periodically. See Listing 13-3.
public class QuotesProcessorTask {

    @Autowired
    @Qualifier("brokerServiceRequired_TX")
    BrokerService brokerServiceRequired_TX;

    public void processNewQuotes() {
        List<QuoteDTO> newQuotes = brokerServiceRequired_TX.findNewQuotes();
        newQuotes.forEach(item -> {
            if (item.getStatus().equals(Quote.NEW)) {
                brokerServiceRequired_TX.processNewQuote(item.getId());
            }
        });
    }
}
Listing 13-3

Scheduled Task for Processing New Quotes (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\schedule\QuotesProcessorTask.java)

If "New" quotes are found, the scheduler invokes the processNewQuote() method of the broker service once for each new quote, passing the ID of the quote. Note that this method invocation happens within a transaction, whose configuration you will see soon. Listing 13-4 shows the broker service.
public class BrokerServiceImpl implements BrokerService{
private static volatile boolean flipFlop = false;
    @Autowired
    @Qualifier("auctionServiceRequired_TX")
    AuctionService auctionServiceRequired_TX;
    @Autowired
    @Qualifier("stockOrderServiceRequired_TX")
    StockOrderService stockOrderServiceRequired_TX;
    @Autowired
    @Qualifier("auctionServiceRequiresNew_TX")
    AuctionService auctionServiceRequiresNew_TX;
    @Autowired
    @Qualifier("stockOrderServiceRequiresNew_TX")
    StockOrderService stockOrderServiceRequiresNew_TX;
    private static synchronized void flipFlop() throws QuotesBaseException{
        if(flipFlop){
            flipFlop = false;
        }
        else{
            flipFlop = true;
        }
        if(flipFlop){
            throw new QuotesBaseException("Explicitly thrown by Broker "
                + "Application to Roll Back!");
        }
    }
    @Override
    public List<QuoteDTO> findNewQuotes(){
        List<QuoteDTO> newQuotes = auctionServiceRequired_TX.findNewQuotes();
        return newQuotes;
    }
    @Override
    public void processNewQuote(Long id)throws QuotesBaseException{
        Optional<QuoteDTO> quoteQueried =
            auctionServiceRequired_TX.findQuoteById(id);
        QuoteDTO quoteDTO = (QuoteDTO) quoteQueried.get();
        Integer testCase = quoteDTO.getTest();
        if((testCase == 1) || (testCase == 5) || (testCase == 6)
                || (testCase == 7)){
            auctionServiceRequired_TX.confirmQuote(quoteDTO);
            stockOrderServiceRequired_TX.sendOrderMessage(quoteDTO);
        }
        else if(testCase == 2){
            auctionServiceRequired_TX.confirmQuote(quoteDTO);
            stockOrderServiceRequired_TX.sendOrderMessage(quoteDTO);
            flipFlop();
        }
        else if(testCase == 3){
            auctionServiceRequired_TX.confirmQuote(quoteDTO);
            try{
                stockOrderServiceRequiresNew_TX.sendOrderMessage(quoteDTO);
            }
            catch(QuotesMessageRollbackException
                    quotesMessageRollbackException){
                LOGGER.error(quotesMessageRollbackException.getMessage());
            }
        }
        else if(testCase == 4){
            try{
                auctionServiceRequiresNew_TX.confirmQuote(quoteDTO);
            }
            catch(QuotesConfirmRollbackException
                    quotesConfirmRollbackException){
                LOGGER.error(quotesConfirmRollbackException.getMessage());
            }
            stockOrderServiceRequired_TX.sendOrderMessage(quoteDTO);
        }
        else if(testCase == 8){
            try{
                auctionServiceRequiresNew_TX.confirmQuote(quoteDTO);
                // PROPAGATION_REQUIRES_NEW Because, during next time out
                // of QuotesProcessorTask we shouldn't fetch this quote
            }
            catch(QuotesConfirmRollbackException
                    quotesConfirmRollbackException){
                LOGGER.error(quotesConfirmRollbackException.getMessage());
            }
            stockOrderServiceRequired_TX.sendOrderMessage(quoteDTO);
        }
        else{
            LOGGER.debug("Undefined Test Case");
        }
    }
}
Listing 13-4

Broker Service Coordinating New Quote Processing (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\service\BrokerServiceImpl.java)

You autowire references to the auction service and the stock order service within the broker service. There are two references to each of these services; this is just a hack to arrange the fixtures for the various test scenarios you are going to validate. To make it clear, you have two references to the auction service, named auctionServiceRequired_TX and auctionServiceRequiresNew_TX. As the names imply, the write methods of auctionServiceRequired_TX execute in a PROPAGATION_REQUIRED context, whereas those of auctionServiceRequiresNew_TX execute in a PROPAGATION_REQUIRES_NEW context. The rest of the code checks the incoming parameter, on which an indicator (an integer value) is piggy-backed to identify which test scenario (test case) is being executed. The general strategy is to use PROPAGATION_REQUIRED where you want to propagate transactions between parent and child services, and to use PROPAGATION_REQUIRES_NEW wherever you explicitly generate error conditions, so that the error is confined, in a controlled manner (using try-catch), to the context of that service method alone and the rest of the execution flow can proceed; again, this is to facilitate the fixtures for demonstration purposes.

Be aware that in a real production scenario you do not want this arrangement of test scenarios; if you want atomic operations across services, you just configure everything using PROPAGATION_REQUIRED, which is the (testCase == 1) scenario alone.
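The difference between the two propagation modes can be illustrated with a toy model (plain Java, not Spring's actual implementation; `runTx` is an invented helper): under REQUIRED, a child failure happens inside the parent's transaction and poisons the whole unit of work; under REQUIRES_NEW, the child runs in its own transaction, so a caught failure stays confined.

```java
public class PropagationDemo {
    // Runs work in a simulated transaction; any RuntimeException marks it
    // rolled back, mirroring Spring's default rollback behavior.
    static String runTx(Runnable work) {
        try {
            work.run();
            return "COMMITTED";
        } catch (RuntimeException e) {
            return "ROLLED_BACK";
        }
    }

    public static void main(String[] args) {
        // PROPAGATION_REQUIRED analogue: the child's failure occurs inside
        // the parent's transaction, so the whole unit of work rolls back.
        String required = runTx(() -> {
            // ... parent work ...
            throw new RuntimeException("child failure propagates to parent tx");
        });

        // PROPAGATION_REQUIRES_NEW analogue: the child runs in its own
        // transaction; the parent observes the outcome but commits its own work.
        String requiresNew = runTx(() -> {
            String child = runTx(() -> {
                throw new RuntimeException("confined to the inner tx");
            });
            // child is "ROLLED_BACK"; the parent carries on regardless
        });

        System.out.println(required + " / " + requiresNew);
    }
}
```

Running this prints `ROLLED_BACK / COMMITTED`, which is exactly the behavior the broker service exploits in test cases 3, 4, and 8.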

There is another utility method, flipFlop(), defined in the broker service. This method alternates the simulation of an error between two consecutive executions: if an error was simulated in one execution of a method, the next execution of the same method will not simulate the error. In this way, if you have executed a test case to validate an error scenario in one pass, you can also see what the invariants look like when the same flow executes without that error. This is useful when, say, message consumption fails in the first pass and you want the reattempted consumption to succeed in the next pass; in that manner, it is easy for you to test and visualize the results. Leave this here for the time being; it will become clearer when you execute the tests.

Next, look at the auction service shown in Listing 13-5.
@Service
public class AuctionServiceImpl implements AuctionService{
    @Autowired
    private QuoteRepository quoteRepository;
    @Override
    public QuoteDTO confirmQuote(QuoteDTO quoteDTO)
            throws QuotesConfirmRollbackException{
        Integer testCase = quoteDTO.getTest();
        Optional quoteQueried = quoteRepository.findById(quoteDTO.getId());
        Quote quote = null;
        Quote quoteSaved = null;
        if(quoteQueried.isPresent()){
            quote = (Quote) quoteQueried.get();
            quote.setStatus(Quote.CONFIRMED);
            quote.setUpdatedAt(new Date());
            quoteSaved = quoteRepository.save(quote);
        }
        if(testCase == 4){
            flipFlop();
        }
        Optional quoteQueriedAgain = quoteRepository.findById(quoteDTO.getId());
        return getQuoteDTOFromQuote((Quote) quoteQueriedAgain.get());
    }
}
Listing 13-5

Auction Service Confirming New Quotes into MySQL DB (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\service\AuctionServiceImpl.java)

The auction service changes the status of a new quote to “Confirmed.”

The broker service, after invoking the auction service, will call the stock order service. The stock order service is a JMS message sender and is shown in Listing 13-6.
public class StockOrderServiceImpl implements StockOrderService{
    private JmsTemplate jmsTemplate;
    public void setJmsTemplate(JmsTemplate jmsTemplate) {
        this.jmsTemplate = jmsTemplate;
    }
    public void sendOrderMessage(final QuoteDTO quoteDTO)
            throws QuotesMessageRollbackException{
        Integer testCase = quoteDTO.getTest();
        jmsTemplate.send(new MessageCreator() {
            public Message createMessage(Session session) throws JMSException {
                return session.createObjectMessage(quoteDTO);
            }
        });
        if(testCase == 3){
            throw new QuotesMessageRollbackException(
                "Explicitly thrown by Message Sender to Roll Back!");
        }
    }
}
Listing 13-6

Stock Order Service Sending Messages to ActiveMQ Against Confirmed Quotes (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\messaging\StockOrderServiceImpl.java)

For the sake of completeness of the code, Listing 13-7 shows the Quote model class.
@Entity
@Table(name = "quote")
@Data
@EqualsAndHashCode(exclude = { "id" })
public class Quote{
    public static final String NEW = "New";
    public static final String CONFIRMED = "Confirmed";
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;
    @NotBlank
    @Column(name = "symbol", nullable = false, updatable = false)
    private String symbol;
    @Column(name = "sellerid", nullable = false, updatable = false)
    private Long sellerId;
    @Column(name = "buyerid", nullable = false, updatable = false)
    private Long buyerId;
    @Column(name = "amount", nullable = false, updatable = false)
    private Float amount;
    @Column(name = "status", nullable = false, updatable = true)
    private String status;
    @Column(name = "test", nullable = true, updatable = true)
    private Integer test;
    @Column(name = "delay", nullable = true, updatable = true)
    private Integer delay = 0;
    @Column(name = "createdat", nullable = true, updatable = false)
    @Temporal(TemporalType.TIMESTAMP)
    private Date createdAt;
    @Column(name = "updatedat", nullable = true, updatable = true)
    @Temporal(TemporalType.TIMESTAMP)
    private Date updatedAt;
}
Listing 13-7

A Quote Entity (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\model\quote\Quote.java)

That’s all of the main Java files for the example. The main configuration of wiring the above classes with the Atomikos XA transaction manager is done in the file spring-sender-mysql.xml, shown in Listing 13-8.
<beans>
    <bean id="stockOrderTarget"
            class="com.acme.ecom.messaging.StockOrderServiceImpl">
        <property name="jmsTemplate" ref="jmsTemplate"/>
    </bean>
    <bean id="stockOrderServiceRequired_TX" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target"><ref bean="stockOrderTarget"  /></property>
        <property name="transactionAttributes">
            <props>
                <prop key="sendOrderMessage*">
                    PROPAGATION_REQUIRED,
                    -QuotesMessageRollbackException, +QuotesNoRollbackException
                </prop>
            </props>
        </property>
    </bean>
    <bean id="stockOrderServiceRequiresNew_TX" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target"><ref bean="stockOrderTarget"  /></property>
        <property name="transactionAttributes">
            <props>
                <prop key="sendOrderMessage*">
                    PROPAGATION_REQUIRES_NEW,
                    -QuotesMessageRollbackException, +QuotesNoRollbackException
                </prop>
            </props>
        </property>
    </bean>
    <bean id="auctionTarget" class= "com.acme.ecom.service.AuctionServiceImpl">
    </bean>
    <bean id="auctionServiceRequired_TX" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target"><ref bean="auctionTarget"  /></property>
        <property name="transactionAttributes">
            <props>
                <prop key="confirm*">
                    PROPAGATION_REQUIRED,
                    -QuotesConfirmRollbackException, +QuotesNoRollbackException
                </prop>
                <prop key="find*">PROPAGATION_SUPPORTS, readOnly</prop>
            </props>
        </property>
    </bean>
    <bean id="auctionServiceRequiresNew_TX" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager"><ref bean="transactionManager" />
        </property>
        <property name="target"><ref bean="auctionTarget"  /></property>
        <property name="transactionAttributes">
            <props>
                <prop key="confirm*">
                    PROPAGATION_REQUIRES_NEW,
                    -QuotesConfirmRollbackException,
                    +QuotesNoRollbackException</prop>
                <prop key="find*">PROPAGATION_SUPPORTS, readOnly</prop>
            </props>
        </property>
    </bean>
    <bean id="brokerTarget" class= "com.acme.ecom.service.BrokerServiceImpl">
    </bean>
    <bean id="brokerServiceRequired_TX" class="org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target"><ref bean="brokerTarget"  /></property>
        <property name="transactionAttributes">
            <props>
                <prop key="process*">
                    PROPAGATION_REQUIRED,
                    -QuotesBaseException, +QuotesNoRollbackException
                </prop>
                <prop key="find*">PROPAGATION_SUPPORTS, readOnly</prop>
            </props>
        </property>
    </bean>
</beans>
Listing 13-8

Spring Wiring for Distributed Transactions Example Using MySQL, ActiveMQ, and Atomikos (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\resources\spring-sender-mysql.xml)

You touched on local transactions in the "Distributed Transactions Revisited" section earlier; Spring's PlatformTransactionManager supports local transactions in JDBC, JMS, AMQP, Hibernate, JPA, JDO, and many other usage scenarios. When you want XA transactions, org.springframework.transaction.jta.JtaTransactionManager can be used. JtaTransactionManager is a PlatformTransactionManager implementation for JTA, delegating to a back-end JTA provider. It typically delegates to a Java EE server's transaction coordinator (via WebLogicJtaTransactionManager for Oracle WebLogic or WebSphereUowTransactionManager for IBM WebSphere), but it may also be configured with a local JTA provider embedded within the application, like Atomikos. Embedded transaction managers give you the freedom of not using a full-fledged app server; you still get much of the XA transaction support of an app server in your standalone JVM applications.
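The "transactionManager" bean referenced by the TransactionProxyFactoryBean definitions in Listing 13-8 would typically be wired along these lines. This is a sketch with assumed property values, not code from the example project: JtaTransactionManager delegates to Atomikos's embedded UserTransactionManager and UserTransactionImp.

```xml
<bean id="atomikosTransactionManager"
        class="com.atomikos.icatch.jta.UserTransactionManager"
        init-method="init" destroy-method="close">
    <!-- Let in-flight transactions finish during shutdown (assumed setting) -->
    <property name="forceShutdown" value="false"/>
</bean>
<bean id="atomikosUserTransaction"
        class="com.atomikos.icatch.jta.UserTransactionImp">
    <!-- Timeout in seconds; 300 is an illustrative value -->
    <property name="transactionTimeout" value="300"/>
</bean>
<bean id="transactionManager"
        class="org.springframework.transaction.jta.JtaTransactionManager">
    <property name="transactionManager" ref="atomikosTransactionManager"/>
    <property name="userTransaction" ref="atomikosUserTransaction"/>
</bean>
```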

In the current example, there are operations in two resource managers, which are to be transactional:
  • AuctionService.confirmQuote()

  • StockOrderService.sendOrderMessage()

The first one uses a JPA repository to merge an entity's changed state into the underlying repository and subsequently into the backing persistent store. The second one uses JMS to send messages to an ActiveMQ queue. You want to make both of the above operations atomic: either both succeed or both roll back. To ease control, you enclose both transactional methods within a third method:
  • BrokerService.processNewQuote()

You now leverage org.springframework.transaction.interceptor.TransactionProxyFactoryBean, designed to cover the typical use case of declarative transaction demarcation by wrapping a singleton target object with a transactional proxy, proxying all the interfaces that the target implements. This is done in the XML configuration. You also configure the transaction type required for all three methods:
  • AuctionService.confirmQuote()
    • <prop key="confirm*">PROPAGATION_REQUIRED</prop>

  • StockOrderService.sendOrderMessage()
    • <prop key="sendOrderMessage*">PROPAGATION_REQUIRED</prop>

  • BrokerService.processNewQuote()
    • <prop key="process*">PROPAGATION_REQUIRED</prop>

Further, the recommended way to indicate to the Spring Framework’s transaction infrastructure that a transaction’s work is to be rolled back is to throw an exception from code that is currently executing in the context of a transaction. The Spring Framework’s transaction infrastructure code will catch any unhandled exception as it bubbles up the call stack and will mark the transaction for rollback.
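The rollback rules in Listing 13-8 extend this: a leading "-" before an exception class name forces rollback when that exception is thrown, and a leading "+" suppresses rollback for it. The decision can be modeled with a toy function (plain Java, not Spring's actual rule engine; the nested exception classes below merely mimic the names used in the example):

```java
public class RollbackRuleDemo {
    // Returns "ROLLED_BACK" if the thrown exception matches the "-" rule,
    // "COMMITTED" if it matches the "+" rule or nothing is thrown. In this
    // toy model, runtime exceptions roll back by default.
    static String outcome(Runnable work,
                          Class<? extends Exception> rollbackFor,
                          Class<? extends Exception> noRollbackFor) {
        try {
            work.run();
            return "COMMITTED";
        } catch (RuntimeException e) {
            if (noRollbackFor.isInstance(e)) return "COMMITTED";   // "+" rule
            if (rollbackFor.isInstance(e)) return "ROLLED_BACK";   // "-" rule
            return "ROLLED_BACK";  // default for unhandled runtime exceptions
        }
    }

    // Stand-ins for the exception types named in Listing 13-8.
    static class QuotesMessageRollbackException extends RuntimeException {}
    static class QuotesNoRollbackException extends RuntimeException {}

    public static void main(String[] args) {
        System.out.println(outcome(
            () -> { throw new QuotesMessageRollbackException(); },
            QuotesMessageRollbackException.class,
            QuotesNoRollbackException.class));
        System.out.println(outcome(
            () -> { throw new QuotesNoRollbackException(); },
            QuotesMessageRollbackException.class,
            QuotesNoRollbackException.class));
    }
}
```

The first call prints `ROLLED_BACK` and the second `COMMITTED`, mirroring how `-QuotesMessageRollbackException, +QuotesNoRollbackException` behaves in the transaction attributes.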

That's all you need to do to wire the business components. Now, if there is an error within any of the above three methods, it will roll back the state changes effected by all three business methods in the underlying resource managers taking part in the XA transaction.

You can now concentrate on the infrastructure plumbing. There are two resources you need to configure, the JMS resource and the JDBC resource. See Listing 13-9 for the XA JMS resource.
<beans>
    <bean id="XaFactory"
            class="org.apache.activemq.ActiveMQXAConnectionFactory">
        <property name="brokerURL" value="tcp://127.0.0.1:61616"/>
    </bean>
    <bean id="connectionFactory"
            class="com.atomikos.jms.AtomikosConnectionFactoryBean">
        <property name="uniqueResourceName" value="JMS-Producer"/>
        <property name="xaConnectionFactory" ref="XaFactory"/>
        <property name="localTransactionMode" value="false"/>
    </bean>
    <bean id="jmsTemplate"
            class="org.springframework.jms.core.JmsTemplate">
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="defaultDestinationName"
            value="notification.queue"/>
        <property name="deliveryPersistent" value="true"/>
        <property name="sessionTransacted" value="true"/>
        <property name="sessionAcknowledgeMode" value="0"/>
    </bean>
</beans>
Listing 13-9

Spring Wiring for XA JMS Resources for the Distributed Transactions Example Using ActiveMQ and Atomikos (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\resources\spring-sender-mysql.xml)

The JmsTemplate used by StockOrderService uses com.atomikos.jms.AtomikosConnectionFactoryBean, which is the Atomikos JMS 1.1 connection factory for JTA-enabled JMS operations. You can use an instance of this class to make JMS participate in JTA transactions without having to issue the low-level XA calls yourself. You indicate whether local transactions are desired using the localTransactionMode property, which defaults to false. With local transactions, no XA enlistment is done; the application should instead perform session-level JMS commits or rollbacks. Note that this feature also requires support from your JMS provider, ActiveMQ in your case. The xaConnectionFactory property is the basic connection factory that encapsulates the plumbing for connecting to the ActiveMQ broker. In your case, the bean is an instance of ActiveMQXAConnectionFactory, a special connection factory class that you must use when you want to connect to the ActiveMQ broker with support for XA transactions.

See Listing 13-10 for the XA JDBC resource.
<beans>
    <bean id="datasourceAtomikos-01"
            class="com.atomikos.jdbc.AtomikosDataSourceBean">
        <property name="uniqueResourceName"><value>JDBC-1</value></property>
        <property name="xaDataSource">
            <ref bean="xaDataSourceMySQL-01"  />
        </property>
        <property name="xaProperties">
            <props>
                <prop key="maxPoolSize">4</prop>
                <prop key="uniqueResourceName">xads1</prop>
            </props>
        </property>
        <property name="poolSize"><value>4</value></property>
    </bean>
    <bean id="xaDataSourceMySQL-01"
            class="com.mysql.cj.jdbc.MysqlXADataSource">
        <property name="url">
            <value>jdbc:mysql://localhost:3306/ecom01</value>
        </property>
        <property name="pinGlobalTxToPhysicalConnection">
            <value>true</value>
        </property>
        <property name="user"><value>root</value></property>
        <property name="password"><value>rootpassword</value></property>
    </bean>
    <bean id="atomikosTransactionManager"
            class="com.atomikos.icatch.jta.UserTransactionManager">
        <property name="forceShutdown"><value>true</value></property>
    </bean>
    <bean id="atomikosUserTransaction"
            class="com.atomikos.icatch.jta.UserTransactionImp">
        <property name="transactionTimeout"><value>300</value></property>
    </bean>
    <bean id="transactionManager"
            class="org.springframework.transaction.jta.JtaTransactionManager">
        <property name="transactionManager">
            <ref bean="atomikosTransactionManager"  />
        </property>
        <property name="userTransaction">
            <ref bean="atomikosUserTransaction"  />
        </property>
    </bean>
    <bean id="springJtaPlatformAdapter"
            class="com.acme.ecom.AtomikosJtaPlatform">
        <property name="jtaTransactionManager" ref="transactionManager" />
    </bean>
    <bean id="hibernateJpaVendorAdapter" class=
            "org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
    <bean id="quoteEntityManager" class=
            "org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="datasourceAtomikos-01"/>
        <property name="jpaVendorAdapter" ref="hibernateJpaVendorAdapter"/>
        <property name="jpaProperties">
            <props>
                <prop key="hibernate.transaction.jta.platform">
                    com.acme.ecom.AtomikosJtaPlatform
                </prop>
                <prop key="javax.persistence.transactionType">JTA</prop>
            </props>
        </property>
        <property name="packagesToScan" value="com.acme.ecom.model.quote"/>
        <property name="persistenceUnitName" value="quotePersistenceUnit" />
    </bean>
    <jpa:repositories base-package="com.acme.ecom.repository.quote"
        entity-manager-factory-ref="quoteEntityManager"/>
</beans>
Listing 13-10

Spring Wiring for XA JDBC Resources for the Distributed Transactions Example Using MySQL and Atomikos (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\resources\spring-sender-mysql.xml)

You can use an instance of the com.atomikos.jdbc.AtomikosDataSourceBean class if you want Atomikos JTA-enabled connection pooling. You construct an instance and set the required properties, and the resulting bean automatically registers with the transaction service and takes part in active transactions. All SQL executed over connections obtained from this class participates in JTA transactions. Starting with Connector/J 5.0.0, the javax.sql.XADataSource interface is implemented by the com.mysql.jdbc.jdbc2.optional.MysqlXADataSource class (renamed com.mysql.cj.jdbc.MysqlXADataSource in Connector/J 8.0, as used in the listing), which supports XA distributed transactions when used in combination with MySQL server version 5.0 and later. You set an instance of MysqlXADataSource as the xaDataSource property of the AtomikosDataSourceBean.

Next, the LocalContainerEntityManagerFactoryBean is Spring’s FactoryBean that creates a JPA EntityManagerFactory, which can then be passed to JPA-based DAOs via dependency injection. Starting with Spring 3.1, the persistence.xml is no longer necessary and LocalContainerEntityManagerFactoryBean supports a packagesToScan property where the packages to scan for @Entity classes can be specified.

To teach Hibernate how to participate in Atomikos transactions, you previously had to set the hibernate.transaction.manager_lookup_class property. However, Hibernate 4.3 removed the long-deprecated TransactionManagerLookup, so Hibernate 4 doesn't know how to work with Atomikos out of the box. The JTA provider must implement org.hibernate.engine.transaction.jta.platform.spi.JtaPlatform to resolve the JTA UserTransaction and TransactionManager from the Spring-configured JtaTransactionManager implementation. An abstract implementation, org.hibernate.engine.transaction.jta.platform.internal.AbstractJtaPlatform, is already available within Hibernate; extending it makes writing a JtaPlatform for Atomikos a breeze. You need to extend this class and set the hibernate.transaction.jta.platform property explicitly. See Listing 13-11.
import javax.transaction.TransactionManager;
import javax.transaction.UserTransaction;
import org.hibernate.engine.transaction.jta.platform.internal.AbstractJtaPlatform;
import org.springframework.transaction.jta.JtaTransactionManager;
import org.springframework.util.Assert;
@SuppressWarnings("serial")
public class AtomikosJtaPlatform extends AbstractJtaPlatform {
    static TransactionManager transactionManager;
    static UserTransaction userTransaction;
    @Override
    protected TransactionManager locateTransactionManager() {
        Assert.notNull(transactionManager,
            "TransactionManager has not been set");
        return transactionManager;
    }
    @Override
    protected UserTransaction locateUserTransaction() {
        Assert.notNull(userTransaction, "UserTransaction has not been set");
        return userTransaction;
    }
    public void setJtaTransactionManager(JtaTransactionManager
            jtaTransactionManager) {
        transactionManager = jtaTransactionManager.getTransactionManager();
        userTransaction = jtaTransactionManager.getUserTransaction();
    }
    public void setTransactionManager(TransactionManager transactionManager) {
        AtomikosJtaPlatform.transactionManager = transactionManager;
    }
    public void setUserTransaction(UserTransaction userTransaction) {
        AtomikosJtaPlatform.userTransaction = userTransaction;
    }
}
Listing 13-11

Resolving JTA UserTransaction and TransactionManager (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\java\com\acme\ecom\AtomikosJtaPlatform.java)

Having looked at the overall configurations, let’s also look into more details of the infrastructure configurations with reference to Listing 13-9 and Listing 13-10.

You need to set the dataSource property of LocalContainerEntityManagerFactoryBean to a JDBC DataSource that the JPA persistence provider is supposed to use for accessing the database. The DataSource passed in here will be used as a nonJtaDataSource on the PersistenceUnitInfo passed to the PersistenceProvider. Note that this variant typically works for JTA transaction management as well; if it does not, consider using the explicit property jtaDataSource instead. You use the AtomikosDataSourceBean explained previously to set this property.

Next, you need a service provider interface implementation that allows you to plug vendor-specific behavior into Spring's EntityManagerFactory creators. HibernateJpaVendorAdapter is the implementation for Hibernate EntityManager; among other things, it supports the detection of annotated packages. The JPA module of Spring Data contains a custom namespace element, <jpa:repositories />, that allows the defining of repository beans. It also contains certain features and element attributes that are special to JPA. Generally, the JPA repositories can be set up using the repositories element. In this example, Spring is instructed to scan com.acme.ecom.repository.quote and all its subpackages for interfaces extending Repository or one of its subinterfaces. For each interface found, the infrastructure registers a persistence technology-specific FactoryBean to create the appropriate proxies that handle invocations of the query methods. Each bean is registered under a bean name derived from the interface name, so your QuoteRepository interface would be registered under quoteRepository. The entity-manager-factory-ref attribute explicitly wires the EntityManagerFactory to be used with the repositories detected by the repositories element; this is usually needed when multiple EntityManagerFactory beans are used within the application. If not explicitly configured, Spring will automatically look up the single EntityManagerFactory configured in the ApplicationContext.
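
The bean-name derivation just mentioned follows the standard JavaBeans decapitalization rule, which the JDK exposes directly via java.beans.Introspector; a quick check:

```java
import java.beans.Introspector;

public class BeanNameDerivation {
    public static void main(String[] args) {
        // Repository bean names are derived by decapitalizing the
        // interface name, per JavaBeans conventions.
        System.out.println(Introspector.decapitalize("QuoteRepository"));
        System.out.println(Introspector.decapitalize("UserRepository"));
    }
}
```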

This completes the configuration of both XA resources. Next, you must configure the XA transaction manager.

You need org.springframework.transaction.jta.JtaTransactionManager, which is an implementation of PlatformTransactionManager. You need to provide implementations of javax.transaction.TransactionManager and javax.transaction.UserTransaction to create a JtaTransactionManager.

The javax.transaction.UserTransaction interface provides the application the ability to control transaction boundaries programmatically. This interface may be used by Java client programs or Enterprise Java Beans. The UserTransaction.begin() method starts a global transaction and associates the transaction with the calling thread. The transaction-to-thread association is managed transparently by the transaction manager. Thus the UserTransaction is the user-facing API.
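
The transaction-to-thread association can be pictured with a ThreadLocal. This is a toy model only, not the real JTA machinery; the begin and commit helpers below are hypothetical stand-ins for UserTransaction.begin() and UserTransaction.commit():

```java
public class TxThreadAssociation {
    // The transaction manager transparently associates the active
    // transaction with the calling thread; a ThreadLocal models that.
    static final ThreadLocal<String> currentTx = new ThreadLocal<>();

    static void begin(String txId) { currentTx.set(txId); }  // ~UserTransaction.begin()
    static void commit()           { currentTx.remove(); }   // ~UserTransaction.commit()

    public static void main(String[] args) throws InterruptedException {
        begin("tx-1");
        System.out.println(currentTx.get());        // calling thread sees tx-1

        Thread other = new Thread(() ->
            System.out.println(currentTx.get()));   // another thread sees null
        other.start();
        other.join();

        commit();
        System.out.println(currentTx.get());        // association cleared
    }
}
```

This is why work done on the calling thread between begin() and commit() is implicitly part of the global transaction, while other threads are unaffected.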

com.atomikos.icatch.jta.UserTransactionManager is a straightforward, zero-setup implementation of javax.transaction.TransactionManager. Standalone Java applications can use an instance of this class to get a handle to the transaction manager and automatically start up or recover the transaction service on first use. com.atomikos.icatch.jta.UserTransactionImp is the javax.transaction.UserTransaction implementation from Atomikos, which you can use for standalone Java applications. This class also automatically starts up and recovers the transaction service on first use.

org.springframework.transaction.jta.JtaTransactionManager is a PlatformTransactionManager implementation for JTA, delegating to back-end JTA providers. It is typically used to delegate to a Java EE server's transaction coordinator, but in your case you have configured it with the Atomikos JTA provider, which is embedded within the application.

Microservice 2: Broker-Web

This microservice is straightforward, with REST controllers. You can invoke the test cases by sending HTTP requests to this microservice. You can also keep watching the dashboard, which provides a view of the Quotes table so that you can visualize the effect of your tests. Figure 13-6 shows the various interactions possible with the Broker Web microservice.
Figure 13-6

The Broker Web console

Microservice 3: Quote Settlement (Settlement-ActiveMQ-Derby)

Visit pom.xml to see the Atomikos distributed transaction manager dependency. You have all of the dependencies described already for Microservice 1, the Broker-MySQL-ActiveMQ microservice. However, as depicted in the architecture diagram, you are using a Derby database in place of MySQL for the downstream settlement of quotes. I will not repeat the dependencies you have already seen; Listing 13-12 shows the dependency for the Derby client alone.
<dependencies>
    <dependency>
        <groupId>org.apache.derby</groupId>
        <artifactId>derbyclient</artifactId>
        <version>10.14.1.0</version>
    </dependency>
</dependencies>
Listing 13-12

Maven Dependency for the Derby Client (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\pom.xml)

My intention in using a Derby database in place of MySQL is simply to make you, the reader, comfortable using XA transactions with an additional XA-compliant resource, a Derby database. With that, you have all the tools to try, as a follow-up exercise, XA transactions spanning two XA-compliant databases within a single XA transaction. You will not attempt that exercise in this text, since it is outside the scope of our discussion.

When a new quote message reaches the ActiveMQ queue, it will be picked up instantaneously by the settlement listener. The settlement listener is a JMS listener and it consumes messages from the queue in a transaction. See Listing 13-13.
public class SettlementListener implements MessageListener {
    @Autowired
    @Qualifier("quotesReconcileServiceRequired_TX")
    QuotesReconcileService quotesReconcileServiceRequired_TX;
    @Autowired
    @Qualifier("quotesReconcileServiceRequiresNew_TX")
    QuotesReconcileService quotesReconcileServiceRequiresNew_TX;
    public void onMessage(Message message) {
        try {
            ObjectMessage objectMessage = (ObjectMessage) message;
            reconcile((QuoteDTO) objectMessage.getObject());
        }
        catch (JMSException e) {
            throw new RuntimeException(e);
        }
        catch (QuotesBaseException e) {
            throw new RuntimeException(e);
        }
    }
    private void reconcile(QuoteDTO quoteDTO)throws QuotesBaseException{
        Integer testCase = quoteDTO.getTest();
        if(testCase.equals(1) || testCase.equals(2) || testCase.equals(4) ||
                testCase.equals(8)){
            quotesReconcileServiceRequired_TX.reconcile(quoteDTO);
        }
        else if(testCase.equals(5)){
            try{
                quotesReconcileServiceRequiresNew_TX.reconcile(quoteDTO);
            }
            catch(QuotesBaseException quotesBaseException){
                LOGGER.error(quotesBaseException.getMessage());
            }
            flipFlop();
        }
        else if(testCase.equals(6)){
            try{
                quotesReconcileServiceRequiresNew_TX.reconcile(quoteDTO);
            }
            catch(QuotesBaseException quotesBaseException){
                LOGGER.error(quotesBaseException.getMessage());
            }
        }
        else if(testCase.equals(7)){
            try{
                quotesReconcileServiceRequired_TX.reconcile(quoteDTO);
            }
            catch(QuotesBaseException quotesBaseException){
                LOGGER.error(quotesBaseException.getMessage());
            }
            flipFlop();
        }
        else{
            LOGGER.debug("Undefined Test Case");
        }
    }
}
Listing 13-13

Transacted Message Listener Orchestrating Message Consumption and Quotes Settlement (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\main\java\com\acme\ecom\messaging\SettlementListener.java)

On consuming the message, the settlement listener will invoke the QuotesReconcileService.reconcile(QuoteDTO) method to reconcile and settle the quote, again in a transaction. See Listing 13-14.
public class QuotesReconcileServiceImpl implements QuotesReconcileService{
    @Autowired
    private UserRepository userRepository;
    @Override
    public void reconcile(QuoteDTO quoteDTO)throws QuotesBaseException{
        Integer testCase = quoteDTO.getTest();
        Optional<User> sellerQueried =
            userRepository.findById(quoteDTO.getSellerId());
        Optional<User> buyerQueried  =
            userRepository.findById(quoteDTO.getBuyerId());
        User seller = sellerQueried.get();
        User buyer  = buyerQueried.get();
        Date updatedDate = new Date();
        seller.setAmountSold(seller.getAmountSold() + quoteDTO.getAmount());
        seller.setUpdatedAt(updatedDate);
        seller.setLastQuoteAt(quoteDTO.getCreatedAt());
        userRepository.save(seller);
        buyer.setAmountBought(buyer.getAmountBought() + quoteDTO.getAmount());
        buyer.setUpdatedAt(updatedDate);
        buyer.setLastQuoteAt(quoteDTO.getCreatedAt());
        userRepository.save(buyer);
        if(testCase.equals(6)){
            throw new QuotesReconcileRollbackException(
                "Explicitly thrown by Reconcile Application to Roll Back!");
        }
        if(testCase.equals(7)){
            flipFlop();
        }
    }
}
Listing 13-14

Quotes Reconcile Service Doing Settlement Reconciliation (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\main\java\com\acme\ecom\service\QuotesReconcileServiceImpl.java)

During reconciliation, you update the amountSold and amountBought attributes of the seller and the buyer, respectively; these two attributes thus represent running totals, as shown in Listing 13-14. For completeness of the discussion, Listing 13-15 shows the User entity.
@Entity
@Table(name = "stockuser")
@Data
@EqualsAndHashCode(exclude = { "id" })
public class User {
    @Id
    @Column(name = "id", nullable = false, updatable = false)
    private Long id;
    @Column(name = "name", nullable = false, updatable = false)
    private String name;
    @Column(name = "amountsold", nullable = true, updatable = true)
    private Double amountSold;
    @Column(name = "amountbought", nullable = true, updatable = true)
    private Double amountBought;
    @Column(name = "lastquoteat", nullable = true, updatable = true)
    @Temporal(TemporalType.TIMESTAMP)
    private Date lastQuoteAt;
    @Column(name = "createdat", nullable = false, updatable = false)
    @Temporal(TemporalType.TIMESTAMP)
    private Date createdAt;
    @Column(name = "updatedat", nullable = false, updatable = true)
    @Temporal(TemporalType.TIMESTAMP)
    private Date updatedAt;
}
Listing 13-15

User Entity (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\main\java\com\acme\ecom\model\user\User.java)

Those are all the main Java files for this microservice. The main configuration wiring the above classes with the Atomikos XA transaction manager is done in the file spring-listener-derby.xml, shown in Listing 13-16.
<beans>
    <bean id="datasourceAtomikos-02"
            class="com.atomikos.jdbc.AtomikosDataSourceBean">
        <property name="uniqueResourceName"><value>JDBC-2</value></property>
        <property name="xaDataSourceClassName"
            value="org.apache.derby.jdbc.ClientXADataSource" />
        <property name="xaProperties">
            <props>
                <prop key="databaseName">
                    D:/Applns/apache/Derby/derbydb/exampledb
                </prop>
                <prop key="serverName">localhost</prop>
                <prop key="portNumber">1527</prop>
            </props>
        </property>
    </bean>
    <bean id="atomikosTransactionManager"
            class="com.atomikos.icatch.jta.UserTransactionManager">
        <property name="forceShutdown"><value>true</value></property>
    </bean>
    <bean id="atomikosUserTransaction"
            class="com.atomikos.icatch.jta.UserTransactionImp">
        <property name="transactionTimeout"><value>300</value></property>
    </bean>
    <bean id="transactionManager"
            class="org.springframework.transaction.jta.JtaTransactionManager">
        <property name="transactionManager">
            <ref bean="atomikosTransactionManager"  />
        </property>
        <property name="userTransaction">
            <ref bean="atomikosUserTransaction"  />
        </property>
    </bean>
    <bean id="springJtaPlatformAdapter"
            class="com.acme.ecom.AtomikosJtaPlatform">
        <property name="jtaTransactionManager" ref="transactionManager" />
    </bean>
    <bean id="hibernateJpaVendorAdapter" class=
            "org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter"/>
    <bean id="userEntityManager" class=
            "org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="datasourceAtomikos-02"/>
        <property name="jpaVendorAdapter" ref="hibernateJpaVendorAdapter"/>
        <property name="jpaProperties">
            <props>
                <prop key="hibernate.transaction.jta.platform">
                    com.acme.ecom.AtomikosJtaPlatform
                </prop>
                <prop key="javax.persistence.transactionType">JTA</prop>
            </props>
        </property>
        <property name="packagesToScan" value="com.acme.ecom.model.user"/>
        <property name="persistenceUnitName" value="userPersistenceUnit" />
    </bean>
    <jpa:repositories base-package="com.acme.ecom.repository.user"
        entity-manager-factory-ref="userEntityManager"/>
    <bean id="XaFactory"
            class="org.apache.activemq.ActiveMQXAConnectionFactory">
        <property name="brokerURL"
            value="failover:(tcp://127.0.0.1:61616)?timeout=10000"/>
    </bean>
    <bean id="connectionFactory"
            class="com.atomikos.jms.AtomikosConnectionFactoryBean">
        <property name="uniqueResourceName" value="JMS-Consumer"/>
        <property name="xaConnectionFactory" ref="XaFactory"/>
        <property name="localTransactionMode" value="false"/>
    </bean>
    <bean id="notificationListenerContainer" class=
            "org.springframework.jms.listener.DefaultMessageListenerContainer">
        <property name="messageListener" ref="notificationListener"/>
        <property name="receiveTimeout" value="10000"/>
        <property name="connectionFactory" ref="connectionFactory"/>
        <property name="destinationName" value="notification.queue"/>
        <property name="transactionManager" ref="transactionManager"/>
        <property name="sessionTransacted" value="true"/>
        <property name="sessionAcknowledgeMode" value="0"/>
    </bean>
    <bean id="notificationListener"
            class="com.acme.ecom.messaging.SettlementListener">
        <property name="quotesReconcileServiceRequired_TX">
            <ref bean="quotesReconcileServiceRequired_TX"  />
        </property>
        <property name="quotesReconcileServiceRequiresNew_TX">
            <ref bean="quotesReconcileServiceRequiresNew_TX"  />
        </property>
    </bean>
    <bean id="quotesReconcileServiceTarget"
        class="com.acme.ecom.service.QuotesReconcileServiceImpl">
    </bean>
    <bean id="quotesReconcileServiceRequired_TX" class=
            "org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target">
            <ref bean="quotesReconcileServiceTarget"  />
        </property>
        <property name="transactionAttributes">
        <props>
            <prop key="reconcile*">
                PROPAGATION_REQUIRED,
                -QuotesReconcileRollbackException,+QuotesNoRollbackException
            </prop>
            <prop key="find*">PROPAGATION_SUPPORTS, readOnly</prop>
        </props>
        </property>
    </bean>
    <bean id="quotesReconcileServiceRequiresNew_TX" class=
            "org.springframework.transaction.interceptor.TransactionProxyFactoryBean">
        <property name="transactionManager">
            <ref bean="transactionManager" />
        </property>
        <property name="target">
            <ref bean="quotesReconcileServiceTarget"  />
        </property>
        <property name="transactionAttributes">
            <props>
                <prop key="reconcile*">
                    PROPAGATION_REQUIRES_NEW,
                    -QuotesReconcileRollbackException,
                    +QuotesNoRollbackException
                </prop>
                <prop key="find*">PROPAGATION_SUPPORTS, readOnly</prop>
            </props>
        </property>
    </bean>
</beans>
Listing 13-16

Spring Wiring for the Distributed Transactions Example Using Derby, ActiveMQ, and Atomikos (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\main\resources\spring-listener-derby.xml)
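
The rollback rules in the transactionAttributes of Listing 13-16 read as follows: a prop value of -QuotesReconcileRollbackException forces a rollback on that exception, while +QuotesNoRollbackException commits despite it; by default, Spring rolls back only on unchecked exceptions. A framework-free sketch of that rule evaluation (the shouldRollback helper is hypothetical, and the exception classes are modeled as unchecked here for brevity):

```java
public class RollbackRules {
    static class QuotesReconcileRollbackException extends RuntimeException {}
    static class QuotesNoRollbackException extends RuntimeException {}

    // '-Exception' forces rollback, '+Exception' suppresses it;
    // otherwise, only unchecked exceptions trigger rollback.
    static boolean shouldRollback(Exception e) {
        if (e instanceof QuotesReconcileRollbackException) return true;  // '-' rule
        if (e instanceof QuotesNoRollbackException) return false;        // '+' rule
        return e instanceof RuntimeException;                            // default
    }

    public static void main(String[] args) {
        System.out.println(shouldRollback(new QuotesReconcileRollbackException()));
        System.out.println(shouldRollback(new QuotesNoRollbackException()));
        System.out.println(shouldRollback(new IllegalStateException()));
    }
}
```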

I have already explained all of the configurations above except that of DefaultMessageListenerContainer. DefaultMessageListenerContainer is a message listener container variant that uses plain JMS client APIs, specifically a loop of MessageConsumer.receive() calls, which also allows for transactional reception of messages when registered with an XA transaction manager. It is designed to work in a native JMS environment as well as in a Java EE environment. You first set your standard JMS message listener, the settlement listener, so that messages are picked up by SettlementListener.onMessage().
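
Conceptually, the container's receive loop can be modeled with a blocking queue and a poll timeout. This is a toy sketch; the queue, timeout, and dispatch below are stand-ins for the real JMS MessageConsumer.receive() loop and listener invocation:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class ReceiveLoopSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("quote-1");
        queue.put("quote-2");

        // Model of the container's loop: poll with a receive timeout and
        // dispatch each message to the listener until none arrive.
        StringBuilder handled = new StringBuilder();
        String msg;
        while ((msg = queue.poll(100, TimeUnit.MILLISECONDS)) != null) {
            handled.append(msg).append(';');   // stands in for listener.onMessage(msg)
        }
        System.out.println(handled);
    }
}
```

In the real container, each iteration can additionally run inside a transaction managed by the configured transactionManager, so a listener failure returns the message to the queue.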

Microservice 4: Settlement-Web

This microservice is again straightforward, with REST controllers. You can keep watching the dashboard, which provides a view of the User table so that you can visualize the effect of your tests. You will also use this microservice to create a few initial users in the system for test purposes. Figure 13-7 shows the various interactions possible with the Settlement Web microservice.
Figure 13-7

The Settlement Web console

There is a JUnit test class that simply loads the bean definitions from spring-sender-mysql.xml and then puts the main test application to sleep so that the scheduler can check for any new quotes received at predefined intervals. See Listing 13-17.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations="classpath:spring-sender-mysql.xml")
public class BrokerServiceTest {
    @Autowired
    @Qualifier("brokerServiceRequired_TX")
    BrokerService brokerService;
    @Test
    public void testSubmitQuote() throws Exception{
        Thread.sleep(1000 * 60 * 60);
    }
}
Listing 13-17

Main Test Class at the Quote Processor End (ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\test\java\com\acme\ecom\test\BrokerServiceTest.java)

Similarly, there is another JUnit test class that simply loads the bean definitions from spring-listener-derby.xml and puts the main test application to sleep so that the JMS listener can keep listening for any incoming messages on ActiveMQ. See Listing 13-18.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations="classpath:spring-listener-derby.xml")
public class SettlementListenerServiceTest {
    @Test
    public void testSettleQuote() throws Exception{
        Thread.sleep(1000 * 60 * 60);
    }
}
Listing 13-18

Another Test Class at the Quote Settlement End (ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\test\java\com\acme\ecom\test\SettlementListenerServiceTest.java)

Build and Test the Example’s Happy Flow

As the first step, you need to bring up the ActiveMQ server. You may want to refer to Appendix F to get started with ActiveMQ.

You configure a queue that will act as the bridge between the upstream and downstream processing. Listing 13-19 gives the configuration of a queue that can be done in activemq.xml.
<beans>
    <broker>
        <destinations>
            <queue physicalName="notification.queue" />
        </destinations>
    </broker>
</beans>
Listing 13-19

The ActiveMQ Queue Configuration (D:\Applns\apache\ActiveMQ\apache-activemq-5.13.3\conf\activemq.xml)

You may now bring up the ActiveMQ Server:
cd D:\Applns\apache\ActiveMQ\apache-activemq-5.13.3\bin
D:\Applns\apache\ActiveMQ\apache-activemq-5.13.3\bin>activemq start
Next, make sure MySQL is up and running. You may want to refer to Appendix H to get started with MySQL.
D:\Applns\MySQL\mysql-5.7.14-winx64\bin>mysqld --console
Now open a MySQL prompt:
D:\Applns\MySQL\mysql-5.7.14-winx64\bin>mysql -u root -p
mysql> use ecom01;
Database changed
mysql>
To start with clean tables, delete any tables with the names you want for your examples:
mysql> drop table quote;
Next, create the table with the schema required for this example:
mysql> create table quote (id BIGINT PRIMARY KEY AUTO_INCREMENT, symbol VARCHAR(4), sellerid BIGINT, buyerid BIGINT, amount FLOAT, status VARCHAR(9), test INTEGER, delay INTEGER, createdat DATETIME, updatedat DATETIME);
Next, make sure the Derby database is up and running in network mode. You may want to refer to Appendix G to get started with Derby.
D:\Applns\apache\Derby\db-derby-10.14.1.0-bin\bin>startNetworkServer
You can also use the Derby ij tool to create the exampledb database, or to open a connection to an already-created database through the network client driver, using the following command:
D:\Applns\apache\Derby\derbydb>ij
ij> connect 'jdbc:derby://localhost:1527/D:/Applns/apache/Derby/derbydb/exampledb;create=false';

Note

You are using the ij tool from a base location different from your Derby installation, assuming that your database is in a location different from that of the Derby installation, which is a best practice. For further details on how to create a new database, refer to Appendix G.

Here again you will start with clean tables. Delete any tables with the names you want for your examples:
ij> drop table stockuser;
Next, create the table with the schema required for this example:
ij> create table stockuser (id bigint not null, amountbought double, amountsold double, createdat timestamp not null, lastquoteat timestamp, name varchar(10) not null, updatedat timestamp not null, primary key (id));
ij> CREATE SEQUENCE hibernate_sequence START WITH 1 INCREMENT BY 1;

This completes the infrastructure required to build and run the example.

Next, there are four microservices to build and get running. You will do this one by one.

Microservice 1: Quote Processing Microservice

See Listing 13-9 to tweak the ActiveMQ-specific configuration in
ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ\src\main\resources\spring-sender-mysql.xml
<property name="brokerURL" value="tcp://127.0.0.1:61616"/>
Also, you need to tweak the MySQL-specific configuration:
<bean id="xaDataSourceMySQL-01" class="com.mysql.cj.jdbc.MysqlXADataSource">
    <property name="url">
        <value>jdbc:mysql://localhost:3306/ecom01</value>
    </property>
    <property name="user"><value>root</value></property>
    <property name="password"><value>rootpassword</value></property>
</bean>
Now build and package the executables for the Quote Processing microservice and bring up the scheduled processor. A utility script, make.bat, is provided in the folder ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ that you can easily execute:
cd D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ>make
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ>mvn -Dmaven.test.skip=true clean install
Now, the junit test can be run by using the script provided:
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ>run
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-MySQL-ActiveMQ>mvn -Dtest=BrokerServiceTest#testSubmitQuote test

Microservice 2: Broker Web Microservice

First, you want to update the configuration files to suit your environment:
ch13\ch13-01\XA-TX-Distributed\Broker-Web\src\main\resources\application.properties
server.port=8080
spring.datasource.url = jdbc:mysql://localhost:3306/ecom01?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false
spring.datasource.username = root
spring.datasource.password = rootpassword
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5Dialect
spring.jpa.hibernate.ddl-auto = update
spring.freemarker.cache=false
You can now build and package the executables for the Broker Web microservice and bring up the server. There is a utility script provided in folder ch13\ch13-01\XA-TX-Distributed\Broker-Web\make.bat:
cd D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-Web
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-Web>make
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-Web>mvn -Dmaven.test.skip=true clean package
You can run the Spring Boot application, again, in more than one way. The straightforward way is to execute the JAR file via the following commands:
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-Web>run
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Broker-Web>java -jar -Dserver.port=8080 .\target\quotes-web-1.0.0.jar

This will bring up the Broker Web Spring Boot server on port 8080.

You can use a browser (preferably Chrome) and point to the following URL to keep monitoring the processing of all new incoming quotes: http://localhost:8080/.

Microservice 3: Quote Settlement Microservice

See Listing 13-16 to tweak the ActiveMQ-specific configuration in
ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\src\main\resources\spring-listener-derby.xml
<property name="brokerURL" value="failover:(tcp://127.0.0.1:61616)?timeout=10000"/>
Also, you need to tweak the Derby-specific configuration:
<property name="xaProperties">
<props>
    <prop key="databaseName">D:/Applns/apache/Derby/derbydb/exampledb</prop>
    <prop key="serverName">localhost</prop>
    <prop key="portNumber">1527</prop>
    </props>
</property>
Now build and package the executables for the Quote Settlement microservice and bring up the message listener. There is a utility script provided in folder ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby\make.bat:
cd D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby>make
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby>mvn -Dmaven.test.skip=true clean install
Now, the junit test can be run by using the script provided:
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby>run
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-ActiveMQ-Derby>mvn -Dtest=SettlementListenerServiceTest#testSettleQuote test

Microservice 4: Settlement Web Microservice

First, you want to update the configuration files to suit your environment:
ch13\ch13-01\XA-TX-Distributed\Settlement-Web\src\main\resources\application.properties
server.port=8081
spring.datasource.url=jdbc:derby://localhost:1527/D:/Applns/apache/Derby/derbydb/exampledb;create=false
spring.datasource.initialize=false
spring.datasource.driver-class-name=org.apache.derby.jdbc.ClientDriver
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.DerbyTenSevenDialect
spring.freemarker.cache=false
Now build and package the executables for the Settlement Web microservice and bring up the server. There is a utility script provided in folder ch13\ch13-01\XA-TX-Distributed\Settlement-Web\make.bat:
cd D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-Web
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-Web>make
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-Web>mvn -Dmaven.test.skip=true clean package
You can run the Spring Boot application in more than one way. The straightforward way is to execute the JAR file via the following commands:
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-Web>run
D:\binil\gold\pack03\ch13\ch13-01\XA-TX-Distributed\Settlement-Web>java -jar -Dserver.port=8081 .\target\user-web-1.0.0.jar

This will bring up the Settlement Web Spring Boot server on port 8081.

You can use a browser (preferably Chrome) and point to the following URL to keep monitoring the running account status of the users and how the account balance changes as each new incoming quote is settled: http://localhost:8081/.

There are altogether eight test cases that you will execute one by one to validate the example. See Figure 13-8.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig8_HTML.jpg
Figure 13-8

Test cases

Strictly speaking, only three test cases validate the example: the first, the second, and the seventh. All the other test cases invalidate the example, either by simulating various failure conditions or by adjusting the transaction semantics applied to the service methods. I do this to help you understand the error scenarios that can occur in a distributed transaction environment.

In this section, you will look at the first test case, which demonstrates a strict, ACID-compliant transaction in the end-to-end flow. This means either all of the state changes in the resources will be committed or all of them will be rolled back. The rest of the test cases will be executed one by one in the subsequent sections.

As a first step, you will create two test users in the system. Take Postman and create two test users as shown (see Figure 13-9):
http://localhost:8081/api/users
METHOD: POST; BODY: Raw JSON
{ "id" : 11, "name" : "Sam", "amountSold" : 1000.0, "amountBought" : 1000.0 }
{ "id" : 21, "name" : "Joe", "amountSold" : 5000.0, "amountBought" : 5000.0 }
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig9_HTML.jpg
Figure 13-9

Create test users

The moment the test users are created, they will be visible in the settlement console, as shown in Figure 13-10.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig10_HTML.jpg
Figure 13-10

Test user console

To start the first test case, take Postman and create a new quote, as shown in Figure 13-11:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "AAPL", "sellerId" : 11, "buyerId" : 21, "amount" : 100, "test" : 1, "delay" : 2 }
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig11_HTML.jpg
Figure 13-11

Create a new quote

Keep watching the consoles of the broker web service and the settlement web service together. When a new quote is created, as shown in Figure 13-11, it instantaneously gets reflected in the broker web console, as shown in Figure 13-12.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig12_HTML.jpg
Figure 13-12

A new quote arrived for Test Case 1

Within a couple of minutes, the quote gets processed in the upstream microservices and subsequently settled in the downstream microservices. Close observation shows that the status of the quote changes from “New” to “Confirmed” in the broker web console, and that the users’ running balances change by the quote amount in the settlement web console, both shown in Figure 13-13.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig13_HTML.jpg
Figure 13-13

Test Case 1 console

You should also note another aspect in Figure 13-13, which will be relevant in Test Case 8, in the “Test the Message Received Out of Order Scenario” section later. The latest quote creation time in the upstream system gets updated as the Last Quote Time for users whenever the respective user account gets updated.

Since you already looked at every detail of the code in the earlier sections, you will not look at what is happening under the hood again in detail. Instead, you will now concentrate on the transaction semantics.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig14_HTML.jpg
Figure 13-14

Test Case 1 transaction semantics

As shown in Figure 13-14, you applied TX_REQUIRED transaction semantics to all service methods. This means either all of the state changes in the resources will be made or all of them will be rolled back. In the current test case, you have not simulated any errors or failure conditions, hence all of the transactions succeeded.
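The all-or-nothing behavior of TX_REQUIRED can be pictured with a small, self-contained model. This is a toy sketch for illustration only, not Spring or JTA code; only the service names are taken from this example. Every step joins the caller's transaction, so a failure in any step discards the pending effects of all of them.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of TX_REQUIRED propagation (illustration only, not Spring/JTA
 * code): broker, auction, and stock-order work all join one parent
 * transaction, so a failure in any step discards the effects of every step.
 */
public class RequiredPropagationDemo {

    /** Returns the committed effects: all step names, or none if any step fails. */
    static List<String> execute(List<String> steps, int failingStep) {
        List<String> uncommitted = new ArrayList<>();
        for (int i = 0; i < steps.size(); i++) {
            if (i == failingStep) {
                return new ArrayList<>();   // rollback: nothing done so far is kept
            }
            uncommitted.add(steps.get(i));  // effect stays pending until commit
        }
        return uncommitted;                 // commit: all effects become durable
    }

    public static void main(String[] args) {
        List<String> steps = List.of("brokerService", "auctionService", "stockOrderService");
        System.out.println("happy path   -> " + execute(steps, -1));
        System.out.println("broker fails -> " + execute(steps, 0));
    }
}
```

The happy path commits all three effects together, while a failure in any one service leaves no effect at all, which is exactly what Test Case 1 demonstrates end to end.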

In subsequent test cases you are going to simulate error conditions to better understand possible failure conditions in enterprise scenarios. A thorough understanding of all of these possible failure conditions will arm you with better insight, which is very much required when you design for “true microservices,” where you want to stay away from distributed transactions!

Test the Transaction Rollback Scenario

To test the Transaction Rollback test case, take Postman again and create another new quote, as shown in Figure 13-11:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "AMZN", "sellerId" : 11, "buyerId" : 21, "amount" : 200, "test" : 2, "delay" : 2 }
In Figure 13-3, the upstream processing and the downstream processing are separate. There is no strict “super transaction coordinator” across these two domains. If transaction semantics are applied to Figure 13-3, you get Figure 13-15. So within a domain (upstream alone or downstream alone) you may decide to have coordinated transactions across multiple resources, and when it comes to coordination across those domains you may need to think of the BASE approach.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig15_HTML.jpg
Figure 13-15

Test Case 2 transaction semantics

As shown in Figure 13-15, TX_REQUIRED transaction semantics have been applied to all service methods. This means either all of the state changes in the resources will be made or all of them will be rolled back.

In the first pass of execution when the Quotes Processor Task picks up the new quote for processing, you simulate an error within the broker service. This is marked by an X sign in Figure 13-15. So, even though the corresponding transactions in the auction service and stock order service were ready to commit, as shown by the ✓ sign, they will be rolled back since both of them were called within the context of the parent transaction of the broker service where you simulate the error. This is shown in the Quote Processing microservice’s console in Figure 13-16.

Moreover, since the transactions were rolled back in the first pass, the new quote message delivery to ActiveMQ is not committed, so no action takes place in the Quote Settlement microservice (see Figure 13-16). For brevity, this is depicted by a blank circle in the first pass in Figure 13-15.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig16_HTML.jpg
Figure 13-16

Test Case 2 processing first pass

Wait until the Quotes Processor Task picks up the previous quote again (since the status is still New) in the next scheduled trigger, which is shown in Figure 13-17.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig17_HTML.jpg
Figure 13-17

New quote arrived for Test Case 2

In the second pass of the Quotes Processor Task’s execution, it again picks up the new quote corresponding to Test Case 2 for processing. In this pass, you do not simulate any error conditions, as shown by the ✓ signs in Figure 13-15. So, the transactions in the auction service and the stock order service, both ready to commit, are committed, since both were called within the context of the parent transaction of the broker service, which is also committed. Since the new quote message delivery to ActiveMQ is committed, the Quote Settlement microservice receives the quote message for settlement. As shown in Figure 13-15 by the ✓ signs in this second pass for the settlement listener service and the record user transaction service, the complete transactions within the Quote Settlement microservice are committed. This is reflected in the console in Figure 13-18.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig18_HTML.jpg
Figure 13-18

Test Case 2 console

Caution

If you follow the execution of the tests in this chapter exactly, you will find it easy to visualize the effects and relate them to the explanations and screenshots provided here, so you are strongly advised to follow the directions precisely. Further, the setup automatically makes the first pass fail and the second pass succeed for a few of the test cases; this mechanism is designed to keep reader intervention during test execution to a minimum. However, the test fixtures work only if you execute the test cases with a single test client (a Postman browser client), so do not (monkey) test the example with concurrent test clients. Later, once you are comfortable executing the tests as described and can follow the explanations provided, you may start tweaking the code and test fixtures to test further scenarios of your own.

Simulating Relaxed Base Anomalies

So far, you have implemented a BASE system with a reasonable degree of ACID transactions, so you can be sure that eventual consistency is preserved. Now, let’s abuse the transaction attributes to artificially “cut down” the scope of the ACID transactions even further, which brings you to the following relaxed BASE situations.

Test the Lost Message Scenario

To test the Lost Message test case, take Postman again and create another new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "GOOG", "sellerId" : 11, "buyerId" : 21, "amount" : 300, "test" : 3, "delay" : 2 }
Figure 13-19 shows the new quote for Test Case 3. It has a status of “New.” When settled, the quote amount, 300, should get added to the respective running balances of the buyer and seller shown in the figure.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig19_HTML.jpg
Figure 13-19

Test Case 3 console at first

Let’s now look at the transaction semantics. See Figure 13-20.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig20_HTML.jpg
Figure 13-20

Test Case 3 transaction semantics

Here, the Quotes Processor Task picks up the new quote for processing. You simulate an error within the stock order service, marked by an X sign in Figure 13-20. The peer transaction in the auction service has to succeed, so it’s marked with the ✓ sign. The parent transaction in the broker service also needs to succeed. This means the error in the stock order service shouldn’t affect the parent transaction in the broker service or the peer transaction in the auction service, so you apply TX_REQUIRES_NEW semantics to the stock order service. The net effect is that the quote will be marked as “Confirmed”; however, the new quote message to ActiveMQ will fail. Since the quote is already marked as “Confirmed,” the Quotes Processor Task will not pick up this quote again in the next scheduled trigger. The net effect is that this quote will never get settled in the downstream Quote Settlement microservice, as shown in Figure 13-21! The message is lost.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig21_HTML.jpg
Figure 13-21

Test Case 3 console at completion
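The anomaly can be reduced to a few lines. This is a toy model, not the chapter's code, and the names are illustrative: work done in a TX_REQUIRES_NEW transaction commits or rolls back on its own, so a failed message send leaves the parent's "Confirmed" commit untouched and the downstream side never hears about the quote.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of the lost-message anomaly (illustration only): the message
 * send runs in its own TX_REQUIRES_NEW transaction, so its failure does
 * not roll back the parent transaction that confirmed the quote.
 */
public class LostMessageDemo {

    static final List<String> queue = new ArrayList<>();   // stands in for ActiveMQ

    /** Returns the final quote status after one processing pass. */
    static String processQuote(String quoteId, boolean sendFails) {
        String status = "Confirmed";          // parent tx: commits regardless
        try {
            sendInNewTransaction(quoteId, sendFails);
        } catch (RuntimeException e) {
            // inner tx rolled back alone; the parent's commit is unaffected
        }
        return status;
    }

    static void sendInNewTransaction(String quoteId, boolean fail) {
        if (fail) {
            throw new RuntimeException("send to broker failed");
        }
        queue.add(quoteId);                   // committed independently of the parent
    }

    public static void main(String[] args) {
        String status = processQuote("Q3", true);
        // Quote is Confirmed, so it is never re-picked, yet no message
        // reached the queue: the settlement side never hears about it.
        System.out.println(status + ", messages=" + queue);
    }
}
```

Because the quote is already "Confirmed," no scheduler pass will ever retry it, which is precisely why the message is lost for good.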

Test the Duplicate Message Sent Scenario

To validate the Duplicate Message Sent test case, take Postman again and create another new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "NFLX", "sellerId" : 11, "buyerId" : 21, "amount" : 400, "test" : 4, "delay" : 2 }
Figure 13-22 shows the new quote for Test Case 4. It has a status of “New.” When settled, the quote amount, 400, should get added to the respective running balances of the buyer and seller, as shown in the figure.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig22_HTML.jpg
Figure 13-22

Test Case 4 console, initially

Let’s now look at the transaction semantics in Figure 13-23.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig23_HTML.jpg
Figure 13-23

Test Case 4 transaction semantics

Here, the Quotes Processor Task picks up the new quote for processing. You simulate an error within the auction service, marked by an X sign in Figure 13-23. The peer transaction in the stock order service has to succeed, so it’s marked with a ✓ sign. The parent transaction in the broker service also needs to succeed. This means the error in the auction service shouldn’t affect the parent transaction in the broker service or the peer transaction in the stock order service, so you apply TX_REQUIRES_NEW semantics to the auction service. The net effect is that the update marking the quote as “Confirmed” is rolled back; however, the new quote message to ActiveMQ succeeds.

Since the new quote message delivery to ActiveMQ is committed, the Quote Settlement microservice will receive the quote message for settlement. As shown in Figure 13-23 by the ✓ sign in the first pass for the settlement listener service and the record user transaction service, the complete transactions within the Quote Settlement microservice will be committed. This is reflected in the console in Figure 13-24.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig24_HTML.jpg
Figure 13-24

Test Case 4 console, intermediate view

Alarmingly, you can see that the quote settlement happened and the quote amount of 400 was added to the respective running balances of the buyer and seller, as shown in Figure 13-24; however, in the upstream system, the quote is not yet marked as “Confirmed.” This is because the auction service transaction, which marked the quote as “Confirmed,” got rolled back!

The effect is that at the next time-out, the Quotes Processor Task picks up this quote for processing a second time. In your test execution, you don’t simulate the error within the auction service this time. Thus the end-to-end processing of this quote is repeated a second time, and Figure 13-25 reflects the result!
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig25_HTML.jpg
Figure 13-25

Test Case 4 console at completion

This is not the intended behavior: the duplicate message caused a duplicate settlement of the quote!
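One standard defense against this anomaly is to make the settlement consumer idempotent. The sketch below is my own illustration, not the example's code: it records each processed quote ID and ignores redeliveries, so a duplicate message cannot settle the same quote twice.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/**
 * Idempotent-consumer sketch (an assumed pattern, not the chapter's code):
 * settlement is applied at most once per quote ID, so duplicate messages
 * become harmless no-ops. In a real system, the processed-ID store would
 * be a table updated in the same transaction as the balance update.
 */
public class IdempotentSettlementDemo {

    private final Set<Long> processedQuoteIds = new HashSet<>();
    private final Map<String, Double> balances = new HashMap<>();

    /** Settles a quote once; returns false if this quote ID was already settled. */
    boolean settle(long quoteId, String user, double amount) {
        if (!processedQuoteIds.add(quoteId)) {
            return false;                        // duplicate: already settled, skip
        }
        balances.merge(user, amount, Double::sum);
        return true;
    }

    double balanceOf(String user) {
        return balances.getOrDefault(user, 0.0);
    }

    public static void main(String[] args) {
        IdempotentSettlementDemo demo = new IdempotentSettlementDemo();
        demo.settle(4L, "Sam", 400.0);           // first delivery settles
        demo.settle(4L, "Sam", 400.0);           // duplicate delivery is a no-op
        System.out.println(demo.balanceOf("Sam"));   // 400.0, not 800.0
    }
}
```

The key design point is that the duplicate check and the balance update must commit atomically; otherwise the duplicate window merely shrinks instead of closing.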

Test the Duplicate Message Consumption Scenario

To test the Duplicate Message Consumption test case, take Postman again and create another new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "TSLA", "sellerId" : 11, "buyerId" : 21, "amount" : 500, "test" : 5, "delay" : 2 }
Figure 13-26 shows the new quote for Test Case 5. It has a status of “New.” When settled, the quote amount, 500, should get added to the respective running balances of the buyer and seller, shown in the figure.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig26_HTML.jpg
Figure 13-26

Test Case 5 console, initially

Let’s now look at the transaction semantics in Figure 13-27.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig27_HTML.jpg
Figure 13-27

Test Case 5 transaction semantics

Here, no errors are simulated in any of the upstream processing, so the quote status changes from “New” to “Confirmed,” and since the new quote message delivery to ActiveMQ is also successful, the Quote Settlement microservice receives the quote message for settlement. The settlement listener service receives the message and subsequently invokes the quote reconcile service for settlement. Settlement happens, during which the running balances of the buyer and seller are updated. However, once control comes back from the quote reconcile service, an error is simulated, as shown in Figure 13-27 by the X sign in the first pass for the settlement listener service.

A few things to be noted here:
  • The settlement done by the quote reconcile service runs within a TX_REQUIRES_NEW transaction context, so the error that happens in the enclosing or parent transaction (the transaction of the settlement listener service) has no effect on the transaction of the quote reconcile service; hence the effect of the settlement is committed.

  • Even though the settlement listener service has read the message from ActiveMQ, the message listener method, which is in a TX_REQUIRED transaction context, will not commit the message read from ActiveMQ. Hence for ActiveMQ, the message delivery didn’t succeed, so it will try to deliver the message again.

During the retry of message delivery (i.e., during the second pass of the message listener processing), you do not simulate any error, as indicated by the ✓ signs on both services in Figure 13-27. Hence the transactions of the settlement listener service and the quote reconcile service are both successful and committed. However, there is a bigger problem: the quote gets settled once again, causing a double addition and a double deduction in the buyer’s and seller’s accounts, respectively. See Figure 13-28.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig28_HTML.jpg
Figure 13-28

Test Case 5 console at completion

Test the Message Consumed, Processing Failure Scenario

To test the Message Consumed, Processing Failure test case, take Postman again and create another new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "MSFT", "sellerId" : 11, "buyerId" : 21, "amount" : 600, "test" : 6, "delay" : 2 }
Figure 13-29 shows the new quote for Test Case 6. It has a status of “New.” When settled, the quote amount, 600, should get added to the respective running balances of the buyer and seller, as shown in the figure.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig29_HTML.jpg
Figure 13-29

Test Case 6 console, initially

Let’s now look at the transaction semantics shown in Figure 13-30.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig30_HTML.jpg
Figure 13-30

Test Case 6 transaction semantics

Here again, no errors are simulated in any upstream processing, so the quote status changes from “New” to “Confirmed,” and since the new quote message delivery to ActiveMQ is also successful, the Quote Settlement microservice receives the quote message for settlement. The settlement listener service receives the message and subsequently invokes the quote reconcile service for settlement. Settlement is attempted, during which the service tries to update the running balances of the buyer and seller. However, an error is simulated within the quote reconcile service transaction, as shown in Figure 13-30 by the X sign. Since the quote reconcile service transaction is configured with TX_REQUIRES_NEW semantics, the settlement alone is rolled back. And since the message consumption succeeded, the net effect is that the quote remains unsettled forever in the downstream system! See Figure 13-31.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig31_HTML.jpg
Figure 13-31

Test Case 6 console, complete
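A common mitigation for this "consumed but never settled" outcome, sketched here as an assumption rather than as the example's code, is to retry the failing settlement a bounded number of times and then park the message in a dead-letter store for manual repair, so the quote is at least not silently lost. ActiveMQ offers broker-side redelivery and dead-letter queues for this purpose; the toy version below just shows the control flow.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/**
 * Bounded-retry listener with a dead-letter store (a sketch of an assumed
 * pattern, not the chapter's code): a message whose settlement keeps
 * failing is parked for inspection instead of being acknowledged and lost.
 */
public class RetryWithDeadLetterDemo {

    static final int MAX_ATTEMPTS = 3;

    /** Drains the queue; returns quotes dead-lettered after MAX_ATTEMPTS failures. */
    static List<String> drain(Deque<String> queue,
                              java.util.function.Predicate<String> settle) {
        List<String> deadLetters = new ArrayList<>();
        while (!queue.isEmpty()) {
            String quote = queue.poll();
            boolean done = false;
            for (int attempt = 1; attempt <= MAX_ATTEMPTS && !done; attempt++) {
                done = settle.test(quote);       // false simulates a settlement failure
            }
            if (!done) {
                deadLetters.add(quote);          // parked for manual repair, not lost
            }
        }
        return deadLetters;
    }

    public static void main(String[] args) {
        Deque<String> queue = new ArrayDeque<>(List.of("Q6-good", "Q6-bad"));
        List<String> dead = drain(queue, q -> !q.endsWith("bad"));
        System.out.println("dead-lettered: " + dead);
    }
}
```

With this in place, a quote whose settlement fails persistently becomes an operational alert instead of a silent inconsistency between the upstream and downstream systems.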

Test the Message Redelivery Scenario

JMS offers no ordering guarantees for message delivery, so messages sent first can arrive after messages sent later. This holds whether or not you use XA transactions.

To test the Message Redelivery test case, take Postman again and create another new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "ORCL", "sellerId" : 11, "buyerId" : 21, "amount" : 700, "test" : 7, "delay" : 2 }
Figure 13-32 shows the new quote for Test Case 7. It has a status of “New.” When settled, the quote amount, 700, should get added to the respective running balances of the buyer and seller, as shown in the figure.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig32_HTML.jpg
Figure 13-32

Test Case 7 console, initially

Let’s now look at the transaction semantics shown in Figure 13-33.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig33_HTML.jpg
Figure 13-33

Test Case 7 transaction semantics

In this case, too, no errors are simulated in any upstream processing, so the quote status changes from “New” to “Confirmed,” and since the new quote message delivery to ActiveMQ is also successful, the Quote Settlement microservice receives the quote message for settlement. The settlement listener service receives the message and subsequently invokes the quote reconcile service for settlement. Settlement is attempted, during which the service tries to update the running balances of the buyer and seller. However, during the first pass, an error is simulated within the quote reconcile service transaction, as shown in Figure 13-33 by the X sign. Further, once control comes back from the quote reconcile service, another error is simulated within the settlement listener service, shown in Figure 13-33 by the X sign in the first pass for the settlement listener service.

Even though the settlement listener service has read the message from ActiveMQ, the message listener method, which runs in a TX_REQUIRED transaction context, will not commit the message read from ActiveMQ. Hence, for ActiveMQ, the message delivery didn’t succeed, so it tries to deliver the message again. During the retry (i.e., during the second pass of the message listener processing), you do not simulate any error, as indicated by the ✓ signs on both services. Hence the transactions of the settlement listener service and the quote reconcile service are both successful and committed, and the quote gets settled correctly in the second pass. See Figure 13-34.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig34_HTML.jpg
Figure 13-34

Test Case 7 console, complete

Common Messaging Pitfalls

In this section, I will demonstrate a few common messaging pitfalls that can occur in distributed scenarios if you have not designed the system carefully.

Test the Message Received Out of Order Scenario

Testing this scenario is slightly tricky since you need to coordinate the timing of each action to match the test fixtures. I will explain based on the configuration of my test fixtures, which should work for you too, provided you have not made any changes to the example configurations.

You need to send a new quote for upstream processing, but you want to delay its processing a little. While this quote is waiting for the delay to elapse, you fire one more new quote for upstream processing, which gets processed nearly straight through, without any delay. As a result, the quote created later reaches the downstream system for further processing first, followed by the quote created initially. In this manner, you simulate messages arriving out of order at the downstream system.

Before you start firing requests for this test case, read the rest of this section completely once so you understand the preparations required to fire the new quotes as instructed. Then come back to this point, read the section again, and carry out the actual test.

Figure 13-35 shows the transaction semantics for this test case.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig35_HTML.jpg
Figure 13-35

Test Case 8 transaction semantics

As shown in Figure 13-35, you don’t simulate any error scenarios; instead, you simulate only messages reaching out of order.

To test this test case, take Postman again and fire a new quote:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "QCOM", "sellerId" : 11, "buyerId" : 21, "amount" : 800, "test" : 8, "delay" : 90}
You should note that you are piggybacking a delay of 90 seconds along with the test request payload, as shown in Figure 13-36.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig36_HTML.jpg
Figure 13-36

Test Case 8 console, creating the first quote

Keep watching the upstream microservice console (command screen). When the Quote Processor Task times out, it fetches this new quote and changes its status from “New” to “Confirmed,” but before completing the end-to-end transaction, it sleeps for the 90-second delay you provided; this is shown in Figure 13-37.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig37_HTML.jpg
Figure 13-37

Test Case 8 console, showing the delay

This 90-second delay is significant because the next time-out of the Quote Processor Task will happen within the next 60 seconds, and during that time-out it shouldn’t fetch this quote because the quote is already in a processing state (its status is no longer “New”).

Here comes the trick. As soon as the Quote Processor Task picks up the quote for processing (within 10 to 20 seconds) and goes to sleep for the 90-second delay you provided, you need to fire another new quote (ideally, have another Postman session ready with the data), like so:
http://localhost:8080/api/quotes
METHOD: POST; BODY: Raw JSON
{ "symbol" : "GILD", "sellerId" : 11, "buyerId" : 21, "amount" : 900, "test" : 8, "delay" : 2 }
This second new quote is shown in Figure 13-38. That’s all; the test execution steps on your part are over.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig38_HTML.jpg
Figure 13-38

Test Case 8 console, creating a second quote

When the Quote Processor Task times out next, it fetches this second new quote alone. This is because, even though the first quote is still being processed in the upstream system, its status has already been updated to “Confirmed,” as shown in Figure 13-38. This is achieved by marking the transaction of the auction service as TX_REQUIRES_NEW, as is evident in Figure 13-35 (another hack for demonstration purposes).

For the second new quote, there is negligible processing delay, so it comes out of upstream processing, passes through ActiveMQ, and reaches the downstream system for further processing while the first quote is expected to still be in the processing stage upstream. In this way, the messages arrive out of order at the downstream system. The second quote is settled as shown in Figure 13-39.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig39_HTML.jpg
Figure 13-39

Test Case 8 console, second quote settled first

Keep watching the browser consoles. Quite soon, you will see the running balances of the users in the downstream system change again. This is the effect of the first quote: its upstream processing was delayed, so it reaches the downstream system out of order and finally gets settled, as shown in Figure 13-40.
../images/477630_1_En_13_Chapter/477630_1_En_13_Fig40_HTML.jpg
Figure 13-40

Test Case 8 console, first quote settled in second place

In the “Build and Test the Example’s Happy Flow” section earlier, you read that the latest quote creation time in the upstream system gets recorded as the last quote time whenever a user account is updated. By this logic, Quote ID 9 in Figure 13-40 is the latest quote created, so the creation timestamp of that quote should be the last quote time for the users. However, since Quote ID 8 and Quote ID 9 reached the downstream system out of order, this data sanity rule is violated!
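One way to keep this particular attribute sane, sketched below as an assumption rather than the example's implementation, is to advance a user's last quote time only when the incoming quote's creation timestamp is newer than the one already stored. A late-arriving older message can then still be settled, but it can no longer overwrite a newer last quote time.

```java
import java.time.Instant;

/**
 * Last-quote-time guard (a sketch, not the book's code): lastQuoteAt is
 * advanced only for quotes created after the currently stored timestamp,
 * so an out-of-order (stale) message cannot roll the timestamp backward.
 */
public class LastQuoteTimeGuardDemo {

    private Instant lastQuoteAt = Instant.EPOCH;   // no quotes seen yet

    /** Applies a quote; returns true if lastQuoteAt was advanced. */
    boolean applyQuote(Instant quoteCreatedAt) {
        if (quoteCreatedAt.isAfter(lastQuoteAt)) {
            lastQuoteAt = quoteCreatedAt;   // in-order (or newest-so-far) update
            return true;
        }
        return false;                       // stale message: keep the newer timestamp
    }

    Instant lastQuoteAt() {
        return lastQuoteAt;
    }

    public static void main(String[] args) {
        LastQuoteTimeGuardDemo user = new LastQuoteTimeGuardDemo();
        Instant first = Instant.parse("2019-01-01T10:00:00Z");   // created first, delayed
        Instant second = Instant.parse("2019-01-01T10:01:00Z");  // created later, arrives first
        user.applyQuote(second);
        user.applyQuote(first);             // arrives late: timestamp is not rolled back
        System.out.println(user.lastQuoteAt());
    }
}
```

This guards only the timestamp attribute; the balance update for the late message would still be applied, since its amount is commutative with the others.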

Summary

Transactions are the bread and butter of the dynamics of any enterprise-class application. Local transactions are good; distributed transactions across partitioned domains, however, are not among the good guys. When you shift from a monolith-based architecture to a microservices-based architecture, you still need transactions; however, you are better off using BASE transactions in place of ACID transactions across partitioned domains, while keeping ACID or XA-compliant transactions within a partition or domain. In this chapter, you saw many error scenarios possible in distributed enterprise systems, and you also saw how such error scenarios could have been avoided had you been using ACID transactions. When you shift from a monolith- to a microservices-based architecture, these error scenarios do not diminish; instead, they multiply, since the degree of distribution in microservices-based architectures is increased manyfold. In the next chapter, you will see techniques you can use when architecting distributed systems to safeguard against such error scenarios, so continue happy reading!