14 BPMN Modeling Patterns

Process modelers repeatedly encounter similar situations and problems. Modeling patterns are proposals for how to model such recurring cases. Instead of developing a specific solution every time, it is often possible to re-use existing and proven solutions. A common patterns catalog in an enterprise helps modelers represent the same things in the same way, which makes the models easier to understand.

It is therefore recommended to develop such collections of patterns and to continuously add new patterns that are detected in daily modeling. Depending on the application domain and the modeling purpose, there may be very different kinds of patterns.

In the following paragraphs, some general modeling patterns are presented. These patterns address problems that are relevant to many companies. Some of these patterns have been developed together with BPMN trainers from AXON IVY AG, Switzerland.

14.1 Four Eyes Principle

The four eyes principle is used for important documents, letters, proposals, etc. Such items must not be created and released by the same person; instead, they need to be reviewed by someone else. This policy helps to ensure that company guidelines are followed, that errors are detected at an early stage, and that fraud is prevented.

The application of this principle in a process is easy to model (figure 175). After a document has been written by the author, it is reviewed by another person. If this person approves the content, the document is released. Otherwise, the author reworks the document, before it is reviewed again.

Figure 175: Four eyes principle

Instead of a document, the created object can also be a proposal, a contract, a calculation, or something similar.

In this pattern, it is important that the two roles represented by the lanes actually need to be performed by different persons. In many other processes it is perfectly acceptable that one single person performs two or more roles, but here this must be prevented. Therefore, the lower lane has explicitly been labeled with “Other Person”. When this pattern is used in a specific process in which the lanes need to have other labels (such as “Developer” and “Quality Inspector”), an annotation can be used for documenting that these roles must be performed by different persons.

At a closer look, the model in figure 175 has a shortcoming. If the author and the reviewer do not eventually agree that the document can be released, the two tasks “Review Document” and “Rework Document” are repeatedly carried out in an endless loop. In practice, the participants will quit this loop after a while – although this is not explicitly defined in the model.

To model this more precisely, a third exit can be added to the splitting gateway. The sequence flow from this exit leads to a second end event that marks the unsuccessful outcome. This is shown in figure 176. Here, the other person can decide that the document is entirely rejected. It would also be possible to let the author decide whether he wants to withdraw the document. This could be modeled with another gateway after “Rework Document” with one exit leading to an end event.
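The control flow described above can be sketched in code. The callback interface and the `max_rounds` cut-off that ends the otherwise endless loop are assumptions for this illustration, not part of the pattern itself:

```python
def four_eyes(write, review, rework, max_rounds=3):
    """Write/review/rework loop of the four eyes principle.
    review() is performed by a different person and returns
    "approve", "reject", or "rework"."""
    document = write()
    for _ in range(max_rounds):
        verdict = review(document)
        if verdict == "approve":
            return "released"
        if verdict == "reject":          # third gateway exit (rejection)
            return "rejected"
        document = rework(document)      # verdict == "rework"
    # in practice, the participants quit the loop after a while
    return "rejected"
```

For example, a reviewer who first demands rework and then approves leads to the result `"released"`.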

This pattern can easily be extended. For example, the four eyes principle could be extended to a six eyes principle by adding another review by a third person. This second review can be carried out in parallel to the first review, as it is modeled in the pattern “Parallel Checks” (chapter 14.4). In another variation, a possible disagreement between the author and the reviewer could be solved by routing these cases to a second check that is carried out by a third person.

Figure 176: Four eyes principle with cancelation

14.2 Decisions in Sub-Processes

Sub-processes often have several alternative end events which lead to different paths in the parent process. The pattern “Decision in Sub-Process” makes clear how the selected path is related to the result of the decision that has been made in the sub-process. This pattern has already been used in the discussion of sub-processes in chapter 7.1 (figure 107).

The sub-process can contain any arbitrary flow. The sub-process “Evaluate Proposal” in figure 177 is just an example. For the pattern it is important that each possible end result of the sub-process is represented by a separate end event. If the same result can be produced in different parts of the sub-process, the corresponding branches are combined into one end event. For example, in figure 177 the two “No” branches end in the common end event “Proposal rejected”. All end events are positioned near the right border of the sub-process.

In the parent process, the sub-process is followed by a splitting exclusive gateway. This gateway has one exit for each of the sub-process’s end events. The labels at the gateway exits indicate which path relates to which end event. In figure 177, the gateway is labeled with a question. Each answer to this question makes clear which result is being referred to. As an alternative, the question at the gateway can be omitted, and the outgoing branches can be labeled with the names of the end events (figure 178).

In both cases, it is also useful to sort the branches from top to bottom in the same order as the end events in the sub-process. Thus, the top branch refers to the top end event, etc.
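The routing logic of this pattern can be sketched in code: the sub-process reports which of its end events was reached, and the parent process selects the corresponding path, with exactly one route per end event. The function and result names below are invented for illustration:

```python
def evaluate_proposal(technically_ok, commercially_ok):
    """Sub-process: each possible outcome is a separate end event;
    both "No" branches share one common end event."""
    if not technically_ok or not commercially_ok:
        return "proposal_rejected"
    return "proposal_accepted"

def parent_process(technically_ok, commercially_ok):
    result = evaluate_proposal(technically_ok, commercially_ok)
    # splitting exclusive gateway: one exit per end event,
    # labeled with the end event it refers to
    routes = {
        "proposal_accepted": "place_order",
        "proposal_rejected": "inform_supplier",
    }
    return routes[result]
```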

Instead of a gateway, it is also possible to use conditional sequence flows. In this case, the sequence flows that are related to the sub-process’s end events start directly at the sub-process border (figure 179). If the expanded view of the sub-process is shown in the diagram, the start of each sequence flow can be drawn directly next to an end event, so that it looks like the continuation of the respective sequence flow. Although the tokens in the sub-process and those in the parent process are different ones, there is a tight relation between them. The graphical layout reflects this relation.

Figure 177: For each of the sub-process’s end events, there is one corresponding exit at the exclusive gateway

Figure 178: The gateway’s exits are labeled with the names of the sub-process’s end events

14.3 Tasks with Multiple Actors

Typically, every task is performed by one actor. This is modeled by placing the task in the actor’s lane. Sometimes, however, a task is jointly performed by several actors. This is difficult to model in BPMN since an activity can only be placed in one lane. It is not allowed to draw an activity symbol in a way that it spans several lanes.

There are several possible solutions to this problem (cf. [Chinosi 2012]). Two of these approaches are discussed in this section.

Figure 179: Decision in a sub-process, followed by conditional sequence flows

Figure 180: A separate lane is used for tasks that are carried out by several actors together

In the sales planning process in figure 180, each of the first three tasks is performed by one different role. The fourth task, “Align Sales Plans”, is to be carried out by all three roles jointly.

Since activities are not allowed to span several lanes, a fourth lane has been introduced that represents the entire sales team. Tasks in this lane are carried out by all members of the team together. The diagram does not graphically show that the sales team consists of the other three lanes’ actors. Therefore, an annotation has been added.

Figure 181: Use of an individual artifact “Additional Participant”

Figure 182: Different types of participation in a task

Another way of modeling tasks with multiple actors is proposed in [Freund and Schrepfer 2012]. It makes use of individual artifacts. In figure 181, the joint task has been placed in the sales manager’s lane. The two different sales representatives have been modeled using artifacts of the type “Additional Participant”. They are connected with the task via associations. The artifact “Additional Participant” is not part of the BPMN standard, but an individual extension. As explained in chapter 13.1, BPMN explicitly allows for such additional artifacts.

The additional participants in figure 181 are labeled with the same names as the two lower lanes so that it becomes clear that they represent the same roles as these lanes. The three participants and their assignments to the task are now depicted in rather different ways. The sales manager’s assignment is represented by placing the task in his lane, while the other participants are represented by small person icons. This raises the question of which of a task’s participants should be represented by the lane, and which by the newly introduced artifact. In most cases, the lane is used for the participant that plays the leading role in carrying out the task.

Very often there are different ways of participating in an activity. They can be classified according to RACI. The letters of this acronym stand for the types of involvement:

R – Responsible: Doing the actual work
A – Accountable: Delegating the task and approving its results
C – Consulted: Providing expertise and advice
I – Informed: Being kept up-to-date on the activity and its results

When using the “Additional Participant” artifact, it can be defined that the responsible role is represented by the lane, while the other roles are modeled as additional participants. Their associations to the task can be labeled with the types of involvement, as it is shown in figure 182.
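As a sketch, the RACI assignments of a task can be captured in a simple data structure, with the responsible (R) role taking the lane and all other roles becoming additional participants labeled with their type of involvement. The role names and the dictionary interface are assumptions for this example:

```python
# The four RACI involvement types and their labels.
RACI = {"R": "Responsible", "A": "Accountable",
        "C": "Consulted",   "I": "Informed"}

def participants(assignments):
    """Split a task's RACI assignments into the lane role (the single
    "R" role) and the additional participants with their labels."""
    assert all(kind in RACI for kind in assignments.values())
    lane = [role for role, kind in assignments.items() if kind == "R"]
    extra = {role: kind for role, kind in assignments.items() if kind != "R"}
    return lane[0], extra
```

For instance, a task assigned `{"Sales Manager": "A", "Sales Rep. North": "R", "Sales Rep. South": "C"}` would place “Sales Rep. North” in the lane and model the other two roles as additional participants.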

To use this way of representing multiple actors, it is important to select a modeling tool that allows individual artifacts to be defined. There are also modeling tools on the market which already have pre-defined artifacts for additional participants.

14.4 Parallel Checks

When different persons need to check applications, requests, etc. according to different criteria, these checks can be carried out in parallel. In chapter 6.3 an example with parallel checks has been used for discussing the terminate end event.

There is also a simpler way of modeling parallel checks, without terminate end events. Since each check can have a positive or negative result, there can be many different combinations of positive and negative results. If all these possible combinations are considered, the models quickly become large and confusing. However, in most cases it is not important exactly which of the checks have a positive or a negative outcome. Instead, only two cases need to be considered: Either all checks have a positive result, or at least one check has a negative result.

Therefore, in figure 183 the checking activities are not directly followed by exclusive splits. Instead, the parallel paths are joined before there is an exclusive split that distinguishes whether all checks have produced a positive result, or not.

In this model, all parallel checks are always carried out entirely, even if one check has already produced a negative result and the other checks are no longer required.

This can be avoided using a terminate end event. Figure 184 shows the solution from chapter 6.3 (figure 84) again. This time, however, the parallel checks have been placed in a sub-process. This is necessary for enabling the process to continue after the terminate end event has been reached. If the terminate end event were part of the top process level, it would terminate the entire process. Since it is in the sub-process, only this sub-process is terminated, and the parent process continues according to the pattern “Decision in Sub-Process”, as described in chapter 14.2.
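The sub-process can be sketched in code, assuming each check is a callable returning True or False: the overall result is negative as soon as one check fails. Note that, unlike a real terminate end event, this sketch merely ignores the remaining results instead of aborting the still-running checks:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_checks(checks):
    """Run all checks concurrently; the first negative result makes
    the overall result "rejected" (the terminate end event), and any
    further results are ignored rather than truly aborted."""
    with ThreadPoolExecutor() as pool:
        for ok in pool.map(lambda check: check(), checks):
            if not ok:
                return "rejected"    # terminate end event reached
    return "approved"                # all checks positive
```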

In this context, a remark concerning the use of lanes in sub-processes may be useful. If the parallel checking tasks are to be performed by different roles, it seems an obvious idea to place these sub-process tasks in different lanes. However, the entire sub-process itself is already situated in exactly one lane of the parent process. As a consequence, the contained tasks are also part of this lane and cannot be placed in other lanes. Especially when the sub-process is drawn in a separate diagram, this dependency is often overlooked, and further lanes are wrongly drawn.

Figure 183: Parallel checks

Figure 184: Parallel checks with terminate end event

One possibility is to use nested lanes, i.e. lanes that are partitioned into sub-lanes (see chapter 2.4). In the parent process, the sub-process is placed exactly in one lane. In the sub-process’s diagram, this lane is then divided into further lanes. Another possibility is to use a call activity (cf. chapter 7.5) and to call a separate process instead of a normal sub-process. Since a called process is a completely independent process, it can contain any arbitrary lanes.

Figure 185: Depending on the end event that is reached, a different path is selected in a subsequent sub-process

14.5 Process Interfaces

Some process notations contain an element called “process interface”. In processes with more than one end event, process interfaces can be used to indicate for each end event which process will be triggered after that end event is reached. Process interfaces can also be used in front of start events in order to model which preceding process triggers the start event.

BPMN does not provide such an element. If the succeeding processes are entirely independent, they can be modeled in separate pools. The corresponding start and end events of the two processes can be modeled as message events and connected with a message flow.

If the two processes are sub-processes of the same parent process, there are no specific BPMN elements to mark this kind of connection. Considering the discussions in chapter 6.2, there are different ways to model this.

If different sub-processes should be triggered, based on the preceding sub-processes’ end events, then an exclusive splitting gateway is required in the parent process. This corresponds to the pattern “Decision in Sub-Process” (chapter 14.2).

Figure 186: Process interfaces made explicit with annotations

The reaching of the end event “Standard order entered” in figure 185 causes the sub-process “Process Order” to be started. If “Urgent order entered” is reached instead, the sub-process “Process Urgent Order” is triggered.

If the first steps of the sub-process “Process Order” should vary, depending on the preceding sub-process, this can be achieved with a splitting gateway at the beginning. The labels on the gateway exits indicate the related end events of the preceding sub-processes.

There is another variation that shows the relationships between the corresponding events more clearly. Instead of a splitting gateway, in figure 186 the sub-process “Process Order” contains two different “none” start events. For each start event, the name of the preceding process is shown in an annotation. There are also annotations attached to the end events, stating the names of the corresponding succeeding processes.

14.6 Synchronizing Parallel Paths

The communication between different processes is modeled with message flows. For example, a catching intermediate message event is used to define that a process waits for a message and can only continue when another process has sent a message of a specific type. But how can the same thing be modeled within one process? Message flows are only allowed between pools; they must not start and end in the same pool.

In most cases it is possible to use sequence flow and parallel gateways for modeling such waiting situations. In figure 187, the task “Develop Detailed Concept” in the upper path should not be started before “Provide Funding” has been completed in the lower path. This can easily be achieved with parallel gateways. In figure 188, the token from the lower path is duplicated at the parallel gateway. In the upper path, both the token from the upper path and one of the lower path’s tokens need to arrive at the joining parallel gateway, before “Develop Detailed Concept” can be started.

Figure 187: Parallel paths which are to be synchronized

Figure 188: Synchronization with parallel gateways

If there are many such synchronizations in a large diagram, it may become rather complex and difficult to understand.

It becomes really difficult if a communication is required between sub-processes, because sequence flows must not cross the boundaries of sub-processes. Therefore, in figure 189, no sequence flow can be modeled between the lower and the upper path. Neither is it possible to use link events, as discussed in chapter 6.4, since they are only layout elements that allow for interrupting the line that represents a sequence flow. It is not permitted to position a throwing link event in one sub-process and the related catching link event in the other sub-process, since this would also establish a sequence flow crossing sub-process borders.

Sometimes it is proposed to use signal events for this problem. In the lower sub-process the task “Provide Funding” could be followed by a throwing intermediate signal event “Funding provided”. In the upper sub-process, there could be a catching intermediate signal event after the task “Develop Basic Concept”. The upper path then waits at this catching event for the arrival of the signal “Funding provided”, before “Develop Detailed Concept” can be started.

Unfortunately, there is a problem with this solution. Unlike messages, which are always addressed to a specific receiver, signals are broadcast everywhere. When the signal “Funding provided” is sent, it is not only received by the same process instance, but by all instances of this process. This means that all other waiting process instances will also start developing a detailed concept, although their funding is not provided yet.

Figure 189: Synchronization using sequence flows is not possible for parallel flows from different sub-processes

Figure 190: Synchronization with a conditional event

The proposed solution is shown in figure 190. The upper sub-process waits at the conditional event “Funding provided”. This condition will be fulfilled when the task “Provide Funding” is completed in the lower sub-process. This has been visualized with a “none” intermediate event. It has no effect on the process flow, but it helps the reader to identify the connection between the two sub-processes.

If this pattern is to be implemented as software, the task “Provide Funding” can set the value of a boolean process variable to “true”, while the condition of the waiting event evaluates the value of this variable.
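Under this assumption, the synchronization can be sketched with a per-instance flag, here Python’s `threading.Event`. Because the flag belongs to exactly one process instance, other instances are unaffected, unlike a broadcast signal. The task names are taken from the example above; the rest is illustrative:

```python
import threading

def run_instance():
    """One process instance with two parallel paths that are
    synchronized via a conditional event ("Funding provided")."""
    funding_provided = threading.Event()   # the boolean process variable
    trace = []

    def upper_path():
        trace.append("Develop Basic Concept")
        funding_provided.wait()            # conditional event
        trace.append("Develop Detailed Concept")

    def lower_path():
        trace.append("Provide Funding")
        funding_provided.set()             # condition becomes true

    threads = [threading.Thread(target=upper_path),
               threading.Thread(target=lower_path)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return trace
```

In every run, “Develop Detailed Concept” appears in the trace only after “Provide Funding”.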

14.7 Requests with Different Types of Replies

In a collaboration, the receiver of a message can send different types of answers. For example, a proposal can be accepted or rejected. Based on the received answer, different paths may be selected in the requestor’s process.

There are two ways to model this situation. In figure 191, there are two message types that can be received after sending a request message, either an acceptance message or a rejection message. The two catching message events are preceded by an event-based gateway so that the upper path is selected when an acceptance message arrives while the lower path is selected when a rejection message arrives. This kind of modeling visualizes rather clearly which different types of answers can be sent.

In figure 192, the same case has been modeled in a different way. Here, the different possible answer types are not represented by different message types. Instead, there is only one message type “Reply”. Whether the reply is an acceptance or a rejection is part of the message content. The catching message event is followed by a data-based exclusive gateway which routes the arriving token according to the message content either to the upper or the lower path.
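The routing after the catching message event can be sketched as a simple dispatch on the message content. The message structure with an "answer" field and the route names are assumptions for the example:

```python
def handle_reply(reply):
    """Data-based exclusive gateway: route the arriving token
    according to the content of the single "Reply" message."""
    if reply["answer"] == "acceptance":
        return "process_acceptance"   # upper path
    if reply["answer"] == "rejection":
        return "process_rejection"    # lower path
    raise ValueError("unknown answer type: %r" % reply["answer"])
```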

Figure 191: Different message types for the reply to a request

This kind of modeling represents the real process if the message exchange is not automated. Usually, the reply to a request arrives in an e-mail or a letter. Only when the e-mail or the letter is opened and the content is read is it known whether the request has been accepted or rejected. The model is also easier to understand for people who are not BPMN experts, because they do not need to know the semantics of the event-based gateway.

Figure 192: The type of the answer is determined from the message content

14.8 Processing Cancelations

How to cancel an order has already been described in connection with the introduction of attached intermediate events in chapter 8.1 and with the discussion of compensations in chapter 9.1. Therefore, the basic structure of the pattern in figure 193 is already known: The attached intermediate message event makes sure that after the arrival of a cancelation message, the entire sub-process will be terminated, and a token will flow to the exception sequence flow.

Figure 193: Cancelation

In chapter 9.1 it has been explained how to use compensation activities for modeling how to reverse the effects of completed activities. There are no compensation activities in figure 193. Instead, a general activity “Compensate Completed Activities” has been used. This is based on the assumption that the responsible performer can decide for himself what exactly needs to be done.

In many cases, it does not make sense to try to predict all possible situations in advance. This is especially true for exceptions and special cases. Therefore, these exceptions and special cases are forwarded to a qualified person who can analyze the situation and decide about the necessary steps.

However, if cancelations occur rather frequently in the above example, and the process control is fully automated, then it is useful to model the compensations exactly. It is then possible to process cancelations in a standardized way and just as efficiently as the normal flow.
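Such an exactly modeled, standardized compensation can be sketched as a record of completed activities with attached compensation handlers that are undone in reverse order when the cancelation arrives. The activity names and the cancelation trigger are invented for the example:

```python
def run_order(activities, cancel_after=None):
    """activities: list of (name, do, undo) triples. Performs the
    activities in order; if a cancelation arrives (modeled as the
    name before which it occurs), compensates all completed
    activities in reverse order. Returns the log of steps."""
    log, completed = [], []
    for name, do, undo in activities:
        if name == cancel_after:              # cancelation message arrives
            for done_name, done_undo in reversed(completed):
                log.append("compensate " + done_name)
                done_undo()                   # compensation activity
            log.append("canceled")
            return log
        do()
        log.append(name)
        completed.append((name, undo))        # remember how to undo it
    log.append("completed")
    return log
```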

14.9 Deadline Monitoring

The monitoring of deadlines is a typical use case for event-based gateways. The process in figure 194 sends a request to a partner. If the partner replies within the defined timespan, the reply is processed, and the process is finished. If the deadline is reached and no reply has been received, the event “Deadline reached” is triggered as the first of the two intermediate events. If the number of inquiries is less than a defined maximum number, an inquiry is sent, and the process waits again for a reply.

Figure 194: Deadline monitoring

If the maximum number of inquiries has been reached, the process ends unsuccessfully. In this case, the company may look for another partner. Such activities are not part of this pattern anymore. Without a defined maximum number of inquiries, new inquiries could be sent endlessly if the partner does not answer.
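The control flow of figure 194 can be sketched as a loop, with the timing abstracted into a `wait_for_reply` callback that returns None when the deadline is reached. All names and the callback interface are assumptions:

```python
def monitor_deadline(wait_for_reply, send_inquiry, max_inquiries=2):
    """Event-based gateway loop: wait for a reply; on each deadline,
    send an inquiry, up to max_inquiries times."""
    inquiries = 0
    while True:
        reply = wait_for_reply()          # reply, or None at the deadline
        if reply is not None:
            return ("success", reply)     # "Reply received"
        if inquiries >= max_inquiries:
            return ("failed", None)       # ends unsuccessfully
        send_inquiry()
        inquiries += 1
```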

This pattern can be combined with the pattern “Requests with Different Types of Replies”, which has been described in chapter 14.7. If the variant with different message types is used, additional catching message events need to be added after the event-based gateway. For the other variant, with just one message type, a data-based exclusive gateway can be inserted after the message event “Reply received”. The exit of this splitting gateway is chosen according to the content of the reply message.

14.10 Dunning Procedure

The principle of the pattern “Deadline Monitoring” can also be used for modeling a multi-level dunning procedure. Instead of separately modeling the flows for the first reminder, the second reminder, etc., a loop is used in figure 195. Every time the loop is repeated, the current dunning level and the deadline for this level are determined, before the next reminder is sent.

Figure 195: Dunning procedure with multiple levels

The process then waits at the event-based gateway for the payment. If the deadline is reached before a payment has been received, the next step depends on whether the maximum number of reminders has already been sent. If this is not the case, the next dunning level is reached, and another reminder is sent. If the maximum number of reminders is reached, the dunning procedure is finished without success. This usually triggers further actions, such as taking legal action.

The model shows neither the dunning levels nor the actual timespan for the deadline. This information is transmitted to the receiver of the reminder in the content of the reminder message. The advantage of this pattern is that it represents a dunning procedure with any number of levels in a rather concise way.
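The dunning loop can be sketched accordingly. Deadlines are abstracted into a `payment_received` callback that is polled once per dunning level; all names are illustrative:

```python
def dunning(payment_received, send_reminder, max_level=3):
    """Multi-level dunning loop: determine the current level, send
    the reminder for that level, then wait for the payment."""
    level = 0
    while level < max_level:
        level += 1                     # next dunning level
        send_reminder(level)
        if payment_received():         # payment before the deadline
            return "paid"
    return "dunning failed"            # e.g. legal action follows
```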

14.11 Call for Proposals

A call for proposals is sent to multiple suppliers. Until a defined deadline, each supplier can submit a proposal. Then the best proposal is selected.

The model in figure 196 is again similar to the deadline monitoring pattern. In contrast to that pattern, here the call for proposals is not sent to one, but to several suppliers.

This is done in a multi-instance activity. The supplier pool is marked as multi-instance participant since it represents the group of all involved suppliers.

After sending the call for proposals, the process waits at the event-based gateway for the arrival of proposals. Each time a proposal arrives, it is registered, and the waiting for proposals continues. When the time event “Deadline for proposals reached” occurs, the best proposal is selected, and the process is finished.

Proposals that arrive after the submission deadline are not considered anymore.

Figure 196: Call for proposals

The pattern can be varied. For example, it could be combined with the pattern “Deadline Monitoring”, so that after the deadline has been reached, an inquiry is sent to those suppliers who have not sent a proposal yet.
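The core loop of this pattern can be sketched with the incoming events modeled as a stream of ("proposal", data) and ("deadline", None) tuples; this interface and the scoring function are assumptions for the example:

```python
def call_for_proposals(events, score):
    """Register proposals until the deadline event occurs, then
    select the best one. Proposals after the deadline are ignored."""
    proposals = []
    for kind, data in events:
        if kind == "deadline":           # "Deadline for proposals reached"
            break                        # later proposals are not considered
        proposals.append(data)           # "Register Proposal"
    if not proposals:
        return None
    return max(proposals, key=score)     # "Select Best Proposal"
```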