Chapter 9

Improving Flow

Flow is such an overloaded term. If we exclude all of the mathematical, scientific, media, software product, software language, and music references and just focus on complex product development, there are still multiple usages of the term. Some use the term “flow” as shorthand for workflow. I mentioned in the previous chapter that there can be flow at the individual (or pair or mob) level, which manifests as people being creative, productive, and “in the zone”—even losing track of time while they work. The term “flow,” as I will focus on it in this chapter, describes how work flows through the team’s process.

I really like fellow Professional Scrum Trainer Daniel Vacanti’s definition of flow. He defines flow as the movement of customer value throughout the product development system. This flow can be visualized, can foster collaboration, and allows for the team’s optimal process to emerge. This flow is also something that can be measured and—more importantly—improved.

In this chapter I will describe how a Scrum Team can visualize, manage, and improve their flow. I will also show how these teams can use flow metrics in their Sprint events—essentially showing how Kanban can be an effective, complementary practice for a Scrum Team. In fact, if you were to overlay Scrum and Kanban in a Venn diagram, the region where the subset of Professional Scrum and the subset of Professional Kanban overlap is where Professional Scrum with Kanban exists. You can see this in Figure 9-1.

A drawing of a Venn diagram. A large circle titled “Scrum” on the left contains a smaller circle titled “Professional Scrum”. A large circle titled “Kanban” on the right contains a smaller circle titled “Professional Kanban”. Where the two smaller circles overlap is what is called Professional Scrum with Kanban.

FIGURE 9-1 Professional Scrum with Kanban sits at the intersection of Professional Scrum and Professional Kanban.

Note Professional Kanban would be Kanban practiced according to the Kanban Guide with a focus on flow of value by using the Kanban practices and lean metrics to visualize the work, limit work in progress (WIP), manage the flow, and continuously improve through inspection and adaptation of the definition of how work flows. In other words, it’s when the team uses the inspections from Kanban to make adaptations in order to maximize the flow of value through the system. You can learn more by visiting https://prokanban.org.

Note In this chapter, I am using the customized Professional Scrum process, not the out-of-the-box Scrum process. Please refer to Chapter 3, “Azure Boards,” for information on this custom process and how to create it for yourself.

Visualizing Flow

Professional Scrum Developers want to make their work visible. They know that humans process visuals faster than text. They also realize that a visualization that is made transparent to the entire team creates a shared understanding of what work has been done and, more importantly, what work is left to do. In Chapter 6, “The Sprint,” I introduced the Taskboard. Initially, the tasks on the Taskboard are in the To Do column (state). As the Developers work on a PBI, the related tasks will transition to In Progress and, eventually, to Done. Keep this in mind as I continue talking about visualizing flow.

In the previous chapter, I discussed some collaborative practices that can improve a team’s flow—specifically swarming and mobbing. Both of these practices improve flow as they enable the Developers to focus on a single PBI, without being distracted by other PBIs. This results in a PBI being Done earlier.

It is difficult to visualize flow on the Taskboard. If you omit the outermost To Do and Done columns, you are left with only one column—In Progress—to represent all the different states of work. Only if the Developers were diligent about naming the tasks appropriately would the team be able to determine where they were in the workflow.

For example, let’s assume that a typical PBI’s workflow looks something like this: designing, coding, testing, releasing. Yes, it’s a simplistic, sequential, almost waterfallian workflow, but work with me here. For the Developers to visualize this workflow, they would need to create tasks that essentially map to each of those workflow steps. Generally, this is what Developers do, although they might have many tasks per workflow step. You can see in Figure 9-2 that there are two design tasks, five coding tasks, one testing task, and two release tasks in our example. I’ve added tags to help make this apparent.

A screenshot of a Taskboard showing 10 tasks associated with the “Show customers count” PBI. Tags on each Task work item show that there are 2 Design tasks, 5 Coding tasks, 1 Testing task, and 2 Release tasks.

FIGURE 9-2 A PBI may have multiple tasks per workflow step.

If a Developer were to hoard or hog this PBI, they might execute these tasks sequentially, and it would look something like this: design, design, code, code, code, code, code, test, release, and release. As I mentioned several times in the last chapter, Professional Scrum Developers avoid executing their work sequentially whenever possible. They will instead find ways to work some tasks in parallel.

As you can see, the Taskboard itself isn’t very helpful in modeling a team’s workflow; the tasks themselves are intended to do that. This makes the Taskboard very versatile for representing a variety of disparate workflows—from coding a new feature, to fixing a bug, to upgrading infrastructure, or even to writing documentation. For Developers who would rather have the board itself represent their workflow, the Kanban board may be a better choice for visualizing flow.

The Kanban Board

I covered the Kanban board in Chapter 5, “The Product Backlog,” although I did so in the context of a Product Owner using it as a way to visualize a PBI’s progression to becoming “ready”—not as Developers using it for development. The latter is the more popular use of a Kanban board—to visualize and manage development workflow. A Kanban board turns a linear backlog into an interactive, two-dimensional board, providing a visual flow of work. As work progresses from idea to completion, Developers update the items on the board. The items in our case are PBIs. This visualization makes transparent each item’s progress—or lack of progress.

Note Don’t confuse using the Kanban board with practicing Kanban. Scrum Teams can use the Kanban board to visualize their work without practicing the other aspects of Kanban. That said, the Professional Scrum community views Kanban as a valid complementary practice. For more information on how to practice Kanban within Scrum, download and read www.scrum.org/resources/kanban-guide-scrum-teams.

In Azure Boards, Task work items are not visualized on the Kanban board. Only PBI work items (and Bug work items if you have them enabled) are visible on the Kanban board. The board’s columns initially map to a PBI’s workflow states. This means that, for our custom Professional Scrum process, the board will initially have these columns: New, Ready, Forecasted, and Done. These default states do not make for an interesting or useful Kanban board.

To be useful, Developers would need to expand the default Forecasted column into additional columns, such as Design, Coding, Testing, and Release—all of which would map back to the underlying Forecasted state of the PBI work item type. Figure 9-3 shows an example.

A screenshot of the Kanban board. The default Forecasted column has been replaced with Design, Coding, Testing, and Release columns. To the left of the Design column is the Ready column. The outside columns, New on the left and Done on the right, have been collapsed. There are work items under the Ready, Design, Coding, and Testing columns.

FIGURE 9-3 You can add custom columns to the Kanban board.

Tip When creating a Kanban board, be mindful of having too many columns. Each column is often an opportunity for more work in progress (WIP). Also, make sure that columns are not tied to people. In other words, seeing a column named Skyler is a smell. Instead, label that column based on what Skyler does on the team—and then make sure there are other “Skylers” to help out. There also should be a clear “work has started” column and a clear “work has finished” column. For Scrum teams, the finished column is typically titled Done. Also, try to avoid columns for blocked items.

When the Developers begin work on a PBI, someone will pull it into the Design column. When Design is completed, another Developer will pull it into the Coding column, and so on. When the PBI is done—according to the Definition of Done—it is moved into the Done column on the far right. Whereas the Taskboard requires you to inspect which task is in progress to learn the status of a PBI, on a Kanban board you can simply look at the column the PBI is in to get the same information.

Because each column corresponds to a stage or state of work, you can quickly see the number of items in progress at each state. However, a lag will often occur between when work gets moved into a column and when work actually starts. As a way to address that lag and reveal the actual state of WIP, you can split columns into Doing and Done, as I’ve done with the Design column in Figure 9-4.

A screenshot of the Kanban board, focusing on the Design column. Two new sub-columns exist named “Doing” and “Done”. Under the Doing sub-column is a work item titled “Twitter feed”. Under the Done sub-column is a work item titled “Customer email addresses”

FIGURE 9-4 Columns can be split into Doing and Done.

The Kanban board is nice for modeling the workflow of most of the work (such as coding a new feature), but not all PBIs will make use of all the columns. For example, a bug fix might skip the Design column, and who knows how an infrastructure upgrade PBI would traverse the board. Asymmetrical work like this can be challenging, but before you give up on the Kanban board and return to the Taskboard, please continue reading the upcoming sections on managing flow and flow metrics, which make additional arguments for using the Kanban board. For more information on configuring and using the Kanban board, visit https://aka.ms/kanban-quickstart.

Managing Flow

Professional Scrum Developers are quite interested in managing their flow. They regularly review and improve the way in which they work in order to deliver Done, working product quickly and efficiently. From a flow perspective, this means removing impediments and making improvements so that value moves in a continuous and smooth way. Sounds great. I’m sure that you would like this, too, but how would you get started? I recommend rereading Chapter 8, “Effective Collaboration,” because it covers many collaborative practices that will help you achieve this goal.

Managing flow means always making these kinds of inspections:

  •    Is work flowing smoothly through the system?

  •    Are there any impediments that block flow?

  •    Is a PBI currently blocked?

  •    Do policies (such as WIP limits) need to be established or changed?

  •    Are policies being followed?

  •    Are the Developers starting a new PBI before finishing the previous one?

  •    Can the Developers work in a different way in order to finish a PBI sooner?

  •    How can the Developers collect and use flow metrics appropriately?

  •    Are items aging as expected?

  •    What experiments can the Developers try to improve their process and flow?

Note Aging refers to the amount of time that a PBI remains in progress—usually measured in days—and can be scoped to a particular column (such as Coding). By monitoring aging, the Developers can analyze the flow of their work through the board. By tracking a PBI’s age, Developers can identify impediments, such as dependencies, before they turn into a larger risk of missing the forecast or the Sprint Goal. The aging per column will vary based on the complexity of the PBI. In other words, some PBIs may spend more than a day in design, whereas others may age longer in testing. By inspecting a PBI’s age, Developers can determine whether improvement experiments were successful.
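
Work Item Age is simple date arithmetic. The following sketch is an illustration with made-up data, not an Azure Boards query; it computes the age in days of in-progress PBIs, counting the start day so that no item has a zero-day age:

```python
from datetime import date

def work_item_age(started_on: date, today: date) -> int:
    """Elapsed days since work started, counting the start day (so never zero)."""
    return (today - started_on).days + 1

# Illustrative in-progress PBIs: (id, current column, date work started)
in_progress = [
    (8472, "Coding", date(2021, 3, 1)),
    (8473, "Testing", date(2021, 3, 3)),
]

today = date(2021, 3, 4)
for item_id, column, started in in_progress:
    print(item_id, column, work_item_age(started, today), "days old")  # ages: 4 and 2
```

Comparing these ages against the team’s expectations—or against the historical age for that column—is what turns the raw number into an inspection.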

Many of the aforementioned inspections are things that Professional Scrum Developers should already be making during their Sprint Retrospectives. There shouldn’t be anything new in that list, with the exception of a couple of concepts: policies/WIP limits and flow metrics. The rest of this section will be spent covering those concepts.

Smell It’s a smell when I see a team create a Kanban board that represents their current (dysfunctional) way of working and then leave it—Sprint after Sprint. Sure, for a large, waterfallian organization, the Kanban board on day 1 of Sprint 1 might have 15 columns—most of which are for external handoffs, such as testing or approval—but hopefully that board would start to collapse over time. For a truly cross-functional, self-managing team full of T-shaped individuals, I’d hope that the Kanban board would eventually collapse to a single Developing (also known as In Progress) column.

The Kanban board visualizes the Developers’ current way of doing things whether it is sequential (waterfall) or more collaborative (flow). The fact that a Kanban board has states that seem like waterfall states isn’t the point. If the batch size is small, then it isn’t waterfall. What’s more, if it’s possible to move to a more concurrent collaborative workflow, that’s great. Kanban’s point is to make a team’s current workflow visual so that they can work to simplify it over time.

Note For Scrum Teams practicing Kanban, the Scrum Master should add flow coaching to their repertoire of activities. This includes helping the team reflect and act: following the policies it has created, creating new ones when needed, discussing and acting on exceptions (issues and opportunities), and experimenting to find creative solutions. Much like the “flow manager” on some Kanban teams, the Scrum Master acting as a flow coach should inspire and challenge.

Limiting WIP

Work in progress (WIP) limits constrain the amount of work the Developers undertake at each work state and, thus, in the entire system. Much like a highway at rush hour, you simply do not want to have more traffic (WIP) than the system (Developers) can handle. The goal is for the Developers to focus on completing items before starting new ones. WIP limits encourage this.

Note You might know WIP as work in process, not work in progress. Technically, both are correct. I prefer progress because it describes movement toward a goal, which is what a Professional Scrum Team strives to do in their daily work. Progress also implies improvement (through working and learning). Contrast this to process, which simply conjures a series of actions. Fellow Professional Scrum Trainer Peter Götz dives into this deeper at www.scrum.org/resources/blog/wip-work-inwhat.

WIP limits should be based on the Developers’ capacity and ability to do the work—not on the number of “specialists” for that column. In Azure Boards, this is done by setting a WIP limit on each column. For example, if the Developers set a WIP limit of 1 on the Testing column, this policy requires that the Developers test no more than one item at a time. As a corollary, upstream work should not proceed if it will build up inventory. This means that Developers who are coding should shift to helping with testing, rather than accrue more code to be tested.

As you can see in Figure 9-5, WIP limits are just a soft constraint on the number of items allowed within the column. If WIP goes above the WIP limit, the number will show in red, providing a visual cue to the Developers. Nothing actually prevents a Developer from pulling more items into a column and exceeding the WIP limit. When WIP drops below the defined limit, that is the signal to start new work.
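
The logic behind that red number is easy to express. Here is a minimal sketch, using an invented board snapshot rather than the Azure Boards API, that flags any column whose item count exceeds its WIP limit:

```python
# Hypothetical board state: column name -> (WIP limit, items currently in the column)
board = {
    "Design":  (2, ["Twitter feed", "Customer email addresses", "Search filters"]),
    "Coding":  (3, ["Show customers count"]),
    "Testing": (1, []),
}

for column, (limit, items) in board.items():
    over = len(items) > limit  # Azure Boards renders the count in red when this is true
    flag = "  <-- over WIP limit" if over else ""
    print(f"{column}: {len(items)}/{limit}{flag}")
```

Note that, like Azure Boards itself, this check only reports the breach; nothing stops a Developer from pulling the extra item.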

A screenshot of the Kanban board, focusing on the Design column. The column has a WIP limit of two, but contains three work items so the visualization shows a large red 3 indicating the limit has been exceeded.

FIGURE 9-5 Column WIP limits provide a visual indication that there is too much WIP.

Although setting WIP limits is easy, adhering to them takes a commitment by all Developers. Teams new to the concept may find WIP limits counterintuitive and uncomfortable. However, this single practice has helped Developers identify bottlenecks, improve their process, and increase the quality of their products. Limiting WIP helps flow and improves the Developers’ self-management, focus, commitment, and collaboration.

Note Once the Developers have placed WIP limits on their Kanban board, they have created a pull system. In a pull system, a Developer only starts (pulls) work when there is capacity to do so. For example, only when the Coding column has capacity will someone pull a PBI from the Design-Done column to Coding. Contrast this to a push system, where the PBI that is done with Design is pushed to Coding, possibly creating an inventory backlog. In a pull system, a PBI is pulled, “owned,” and worked on in a just-in-time manner as determined by the team’s capacity. This aligns with the guidance I’ve given previously for using the Taskboard and collaboration. The benefit of implementing a pull system is that it is easy to define what it means for work to have “started.” Without establishing or using WIP limits, the Kanban board is only a visualization.

WIP limits should be discussed at each Sprint Retrospective, along with any related challenges the Developers might have. Limits should be adjusted accordingly. Once balanced, WIP limits will ensure that the Developers will keep a productive pace of work without exceeding their work capacity. WIP limits need to be set low enough to have an impact. If the WIP limits don’t change behavior, they’re not limits.

Much has been written about WIP limits and the intricacies of queuing theory and the underlying mathematics. I recommend starting with Daniel Vacanti’s work at https://actionableagile.com.

Managing WIP

Limiting WIP is necessary to achieve flow, but it alone is not sufficient. To establish flow, the Developers must actively manage their work items in progress. This can take several forms during the Sprint, and there are many complementary practices that they can use to manage WIP.

Here are some examples of how Developers should manage their WIP during the Sprint:

  •    Making sure that PBIs are only pulled into the workflow at about the same rate that they leave the workflow

  •    Ensuring PBIs aren’t left to age unnecessarily

  •    Swarming, pairing, and mobbing to move aging work items

  •    Responding quickly to blocked PBIs

  •    Responding quickly to PBIs that are exceeding the expected Cycle Time level (service level expectation, or SLE)

As I’ve said in previous chapters, it’s important that the Developers do not start new work before they finish existing work. In other words, the Developers should complete all tasks for PBI #1 (according to the Definition of Done) before starting the first task on PBI #2. By starting work at about the same rate as finishing work, the Developers will be effectively managing their WIP.

Regardless of the way the Sprint plan is formulated (tasks, tests, or Kanban board columns), it’s important for the Developers to monitor progress accordingly. This is more than just seeing what they can start on next. It’s about ensuring that the current WIP is progressing normally and not aging unnecessarily. If a PBI gets blocked, the Developers should adopt an “all hands on deck” mindset to get it unblocked. As long as it’s blocked—for whatever reason—it will age unnecessarily, Cycle Time will increase, and forecasts may be missed. Some blockages may require help from the Scrum Master.

The same reasoning applies to PBIs that are queued, ready to be pulled into a new column. Referring back to Figure 9-5, let’s assume that PBI #8472 has been Design-Done for three days whereas PBI #8473 has been Design-Done for only one day. All things being equal, the Developers should pull the older PBI into Coding so that it does not age unnecessarily. This is known as a pull policy. This particular pull policy prevents PBIs from aging unnaturally and encourages productivity. There may be times, however, when the Developers want to pull a newer item for valid reasons, such as to manage dependencies. Changes to pull policies should be discussed and agreed on by the Developers.

A service level expectation (SLE) is a forecast, based on empirical data, of how long it should take any given PBI to flow from start to finish within the workflow (such as Design to Done). The Scrum Team uses its SLE as a gauge to find active flow issues and to inspect and adapt in cases of PBIs missing those expectations. The SLE itself has two parts: a period of elapsed time (days) and a probability associated with that period (for example, 85% of work items should be finished in four days or less). The SLE should be based on the Developers’ historical Cycle Times. Once calculated, the Scrum Team should make its SLE transparent. This is especially important when the Scrum Team moves to continuous delivery.
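
If you wanted to derive an SLE yourself, a simple percentile over historical Cycle Times is enough. This sketch is an illustration only; the `sle` function and the sample values are my assumptions, not part of Azure Boards or ActionableAgile:

```python
import math

def sle(cycle_times_days, probability=0.85):
    """Smallest Cycle Time (days) that covers the requested share of finished items."""
    ordered = sorted(cycle_times_days)
    k = math.ceil(probability * len(ordered))  # round up to cover at least that share
    return ordered[k - 1]

history = [1, 2, 2, 3, 3, 3, 4, 4, 4, 7]  # illustrative Cycle Times in days
print(f"85% of items should finish in {sle(history)} days or less")
```

Even a modest history gives a workable starting point, which the team can then inspect and adapt.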

The ActionableAgile Analytics extension uses a team’s historical data in Azure Boards to calculate its Cycle Time SLE. You can see this plainly displayed on the Dashboard in Figure 9-6. To see additional probabilities (50%, 70%, 95%, etc.), you can view the Cycle Time Scatterplot analytic. For more information on the ActionableAgile Analytics extension, visit https://aka.ms/actionable-agile-analytics.

A screenshot of the ActionableAgile Analytics extension dashboard. It shows a Cycle Time of 4 days or less with 85% of items and 2 items currently in progress.

FIGURE 9-6 The ActionableAgile Analytics extension calculates many flow metrics.

Note You may not have heard of service level expectation (SLE) before. The more popular term in our industry is service level agreement (SLA). I prefer SLE because “expectation” suggests a forecast based on evidence as well as a culture of learning and improving. “Agreement” comes from manufacturing, where tasks are generally repeatable with a high degree of accuracy, and thus implies a commitment or a promise. Although an agreement is more popular with customers and organizations, it doesn’t reflect the reality of operating in the complex space of knowledge work. This subtle renaming is akin to why Sprint commitment was renamed to forecast almost a decade ago.

If a Scrum Team doesn’t have much—or any—historical Cycle Time data, the Scrum Team should make its best guess at an SLE. Over time, as there is more historical data, they can inspect their Cycle Times—using the appropriate analytics—and adapt their SLE. Surprisingly enough, it doesn’t take much history to calculate a fairly accurate SLE.

Inspecting and Adapting Workflow

The Developers use the Scrum events to inspect and adapt their definition of workflow, thereby helping to improve empiricism and optimize the value that they deliver. Although improvement conversations are typically relegated to the Sprint Retrospective, workflow inspections can occur at any time during the Sprint. The workflow inspections that the Developers might make generally break down into two categories: visualization policies and working policies.

Visualization policies include the adding, merging, splitting, or removing of states (columns) to represent a refined workflow. This can be done as the Developers learn new ways of working and bring more transparency to areas that they want to inspect and adapt.

Working policies include making changes to address specific impediments, such as adjusting WIP limits, adjusting the SLE, or even implementing pull policies. Changes to the Definition of Done might also be considered a change to a working policy.

When the team is setting up the Kanban board, the visualization of the Developers’ workflow should include explicit policies about how work flows through each state. This may include one or more items from the Definition of Done. For example, if the Definition of Done includes 12 items, and 4 of those are testing related, then those 4 items might exist in the Testing column’s definition of done. A column may also have additional done criteria that are not in the overarching Definition of Done. Conversely, the Definition of Done may have items that are not listed in any specific column’s done definition. Azure Boards supports a done definition at each column, as you can see in Figure 9-7.

A screenshot of the Columns page within Kanban board Settings. The Testing column is selected and its definition of done has four items listed using Markdown: “Acceptance tests created”, “Acceptance tests added to test plan”, “Automated acceptance tests run in pipeline”, and “All acceptance tests pass”.

FIGURE 9-7 Each WIP column on a Kanban board can have its own definition of done.

Whether or not the Developers establish definitions of done per column, they should make sure that everyone has a clear understanding of what’s required for work to flow through each state. This will help prevent missed communication, additional meetings, and rework. The Kanban board’s configuration should prompt the right conversations at the right time and proactively suggest opportunities for improvement.

As with everything else the Developers do, they should use the Kanban board for a period of time, and then inspect and adapt accordingly. During Sprint Retrospective, as well as at other times, they can evaluate the columns, WIP limits, policies, and definitions of done.

Flow Metrics

The metrics of flow are very different from traditional metrics. Instead of focusing on things like story points and velocity, teams can use new metrics that are more transparent and actionable. By transparent, I mean that the metrics provide a high degree of visibility into not just the progress of the Developers’ work, but also how they work. By actionable, I mean that the metrics themselves will point to specific improvements needed to improve the Developers’ performance. Traditional agile metrics and analytics offer no such visibility, nor any suggestion of what to do when things go wrong.

To be truly agile, teams and organizations should adopt the language of their stakeholders, or at least use terminology that is not technical or confusing. Unfortunately, traditional agile metrics are sometimes confusing to stakeholders. Story points, velocity, burndowns, and so forth are all weird terms that might need to be explained. Also, the conversion of story points to days, which I discourage, will still be done (wrongly) in the heads of those stakeholders. Enter flow metrics, where stakeholders will quickly understand basic concepts of elapsed time in days and item counts.

Here are the four flow metrics, within a defined workflow, that a Scrum Team might find interesting:

  •    Work in Progress (WIP)   The number of PBIs started but not yet finished. Developers can use this leading indicator metric to provide transparency about their progress in order to reduce their WIP and improve their flow.

  •    Cycle Time   The amount of elapsed time between when a PBI starts and when it is Done. In other words, the amount of elapsed time when the PBI enters the first WIP column to when it enters the departure (for example, Done) column. Cycle Time is a lagging indicator and best answers the question “How long will it take this PBI to complete?”

  •    Work Item Age   The amount of time between when a work item started and the current time. This leading indicator applies only to items that are still in progress. For Developers, PBIs in the Product Backlog are not considered to be aging.

  •    Throughput   The number of work items finished per unit of time, such as a Sprint. Throughput is a lagging indicator and best answers the question “How many PBIs will be done by the end of the Sprint or in the next release?”
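
To make these four definitions concrete, here is a small sketch that derives all four metrics from an invented log of start and finish dates (the IDs, dates, and day-counting convention are illustrative assumptions, not Azure Boards output):

```python
from datetime import date

# Illustrative work item log: id -> (date started, date finished or None if in progress)
items = {
    8470: (date(2021, 3, 1), date(2021, 3, 4)),
    8471: (date(2021, 3, 2), date(2021, 3, 3)),
    8472: (date(2021, 3, 3), None),
    8473: (date(2021, 3, 4), None),
}
today = date(2021, 3, 5)

def elapsed_days(start, end):
    return (end - start).days + 1  # count the start day, so no zero-day values

wip = sum(1 for started, finished in items.values() if finished is None)
cycle_times = [elapsed_days(s, f) for s, f in items.values() if f is not None]
ages = [elapsed_days(s, today) for s, f in items.values() if f is None]
throughput = len(cycle_times)  # finished items in the period the log covers

print("WIP:", wip)                  # WIP: 2
print("Cycle Times:", cycle_times)  # Cycle Times: [4, 2]
print("Work Item Ages:", ages)      # Work Item Ages: [3, 2]
print("Throughput:", throughput)    # Throughput: 2
```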

Tip Although these metrics are less confusing than traditional agile metrics, be careful sharing them with stakeholders. They are primarily technical metrics and may not be helpful when shared beyond the Scrum Team. Worse, stakeholders may misinterpret or infer something that is not true (a commitment, a plan, a budget, etc.). It’s better if the Product Owner or Scrum Master reveals the appropriate metrics (such as service level expectation) to stakeholders as necessary.

I’ve already discussed WIP in this chapter, so the WIP metric should be pretty straightforward. It’s simply the number of PBIs that the Developers have started working on but have not yet finished. For Developers practicing strict swarming or mobbing, WIP should be equal to 1. For Developers just getting started with Scrum and not yet practicing any of the collaborative methods I mentioned in Chapter 8, WIP could be as high as the number of Developers or more!

Cycle Time is a telling metric, because it relates to how quickly the Developers are able to finish a PBI once they start working on it. For example, if the Developers pull a PBI into the Design column on Monday, then into Coding that afternoon, then into Testing on Wednesday, and finally into Release on Thursday morning, their Cycle Time for that PBI is four days. When calculating Cycle Time, I recommend rounding up. This way, you won’t have any zero-day Cycle Times.

Note Some practitioners consider Cycle Time to have started with the first commit to the repository and to have ended with deployment to production. On the surface this sounds acceptable, and it could even lend itself to automated calculation via Azure Repos and Azure Pipelines. I see two problems with this approach, however. The first is that work may begin prior to the first commit—and potentially way prior. I’m not just talking about the time it takes to write the code or write the test that is being committed. I’m talking about all the planning conversations, whiteboarding, architecture design, Azure Boards tasking, Azure Test Plans setup, and other non-Git-committable work. Your Cycle Time might miss a whole day (or more) if you begin measuring Cycle Time at the first commit. The second problem is that this approach would require the Developers to work in a new branch per PBI so that any analytics would be sterile—applying only to the PBI in question. Branching per PBI, as you’ll recall from Chapter 8, discourages collaboration and increases risk.

By studying the Work Item Age metric, Developers can perform flow analysis. The Developers can visualize how PBIs are progressing toward the Done column of the board, including how much time PBIs are spending in the Doing versus Done sub-columns of each column. This analysis can help the Developers understand where their process is slowing down or stopping altogether—in order to determine why and how to improve it. As improvement experiments are attempted, the comparison of current Work Item Age with historic ones can be used to determine efficacy.

Throughput is a measure of how fast items depart the process (for example, reach the Done column). The unit of time for which Throughput is measured is up to the team but is typically per Sprint (such as two weeks) for Scrum Teams. Throughput is different from velocity. Velocity is a measurement of Done story points (or another unit of measure) per Sprint, whereas Throughput is a count of Done PBIs.

The Throughput metric answers the important question of “How many PBIs are we expecting to complete this Sprint?” or “How many PBIs might we have done for the October release?” At some point all Product Owners will get questions like this, and they should be ready to provide an answer. Tracking Throughput is one way to be prepared to provide an answer—an answer based on empirical data.

Throughput directly relates to Cycle Time. Longer Cycle Times lead to lower Throughput: as it takes longer to finish an item, the Developers can't finish as many items in a given period of time. In other words, a decrease in Throughput means that less work is getting done, and less work done means less value delivered.

Note You may be wondering why I didn't list Lead Time as a flow metric. If you have been exposed to Lean or Kanban concepts, you have probably heard the term. It is closely related to Cycle Time. Whereas Cycle Time is the elapsed time between when work starts on a PBI and when it's released, Lead Time is the elapsed time between when the PBI was first requested by a stakeholder (or added to the Product Backlog) and when it was released. For example, if a PBI took 4 days to develop and deliver (Cycle Time) but sat in the Product Backlog for 30 days, its Lead Time would be 34 days. Both terms depend on perspective. In other words, Lead Time to the Developers would be considered Cycle Time to the Product Owner. This is not to suggest that tracking the time it takes for an idea to get into a customer's hands isn't important. It most definitely is. The Product Owner should be concerned if these elapsed times increase. I'm only saying that the Product Owner can use the term Cycle Time and define its context differently from how the Developers do.

Calculating Flow Metrics

The flow metrics we’ve discussed are fairly inexpensive to gather. In fact, WIP, Cycle Time, and Throughput take very little time to collect and can even be tracked manually using a simple spreadsheet, which is sometimes a team’s only option. Once the data is in a spreadsheet—or in a more sophisticated system like Azure DevOps—there are a number of analytics (charts and reports) that can provide these metrics.
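To make the "simple spreadsheet" option concrete, here is a minimal sketch (with hypothetical records) of deriving all three metrics from nothing more than a start date and a finish date per PBI:

```python
from datetime import date

# Hypothetical records: (title, started, finished); finished is None while in progress.
records = [
    ("PBI 40", date(2024, 3, 1), date(2024, 3, 5)),
    ("PBI 41", date(2024, 3, 2), date(2024, 3, 4)),
    ("PBI 42", date(2024, 3, 4), None),
]

# WIP: items started but not yet finished.
wip = sum(1 for _, _, finished in records if finished is None)

# Cycle Time per finished item. Counting the start day as day one is a
# convention; a team should pick one convention and apply it consistently.
cycle_times = [(finished - started).days + 1
               for _, started, finished in records if finished is not None]

# Throughput: count of items finished in the period the records cover.
throughput = len(cycle_times)

print(wip, cycle_times, throughput)  # 1 [5, 3] 2
```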

Smell It's a smell when the Developers assume that the Scrum Master will collect all the metrics. If the Scrum Master is knowledgeable, they may teach the Developers or the Product Owner how to collect, analyze, and report the metrics, but it's not the Scrum Master's job to collect metrics. The good news is that Azure DevOps and ActionableAgile Analytics do the heavy lifting anyway.

The four primary analytics used to calculate these flow metrics are:

  •    Cycle Time scatterplot   A representation of how long it takes to complete PBIs. The x-axis represents the timeline and the y-axis represents Cycle Times in days. Dots represent the intersection of a date and number of days (Cycle Time) that it took a specific PBI to complete (move to the Done column, for example).

  •    Throughput run chart   A representation of how many PBIs are completed over a period of time. The x-axis represents the timeline and the y-axis represents the number of PBIs completed on that date. Throughput run charts can be used to monitor team performance, identify performance trends, and forecast future delivery.

  •    Cumulative Flow Diagram (CFD)   Shows the count of PBIs in each column for a period of time. From this chart you can gain an idea of the amount of WIP, average Cycle Time, and Throughput. The x-axis represents the timeline and the y-axis represents the count of PBIs.

  •    Work Item Aging   Tracks the age of in-progress PBIs. The x-axis lists all columns (states) of the process and the y-axis represents how long each PBI has spent in that column. The dots represent the number of PBIs that spent that many days in that column.

Some flow metrics may be calculated from more than one analytic. For example, Throughput can be calculated from the Cycle Time scatterplot, Throughput run chart, as well as the Cumulative Flow Diagram. Table 9-1 lists which flow metrics can be calculated from which analytic.

TABLE 9-1 Analytics and the flow metrics that they can calculate.

Analytic (Chart/Report)          WIP    Cycle Time       Work Item Aging    Throughput
Cycle Time scatterplot                  Yes                                 Yes
Throughput run chart                                                        Yes
Cumulative Flow Diagram (CFD)    Yes    Averages only                       Averages only
Work Item Aging                  Yes                     Yes

All of the core data is already in Azure DevOps, and these metrics are available in various automated ways. As I've previously mentioned, the Analytics service in Azure DevOps provides the ability to query all kinds of data. Dashboard widgets, in-context reports, OData feeds, and custom extensions are all mechanisms a team can use to query these metrics.

As of this writing, Azure DevOps provides three Analytics widgets that are of interest to our conversation about flow metrics:

  •    Cycle Time widget   Displays the Cycle Time of work items closed in a specified timeframe for a single team and backlog level (such as Product Backlog).

  •    Lead Time widget   Displays the lead time of work items closed in a specified timeframe for a single team and backlog level.

  •    Cumulative Flow Diagram   Displays the cumulative flow of items based on the timeframe, team, and backlog level.

In addition to referencing the easy-to-use widgets and in-context reports, teams can query the Azure DevOps OData feed, pulling pre-aggregated data directly from Azure DevOps. Microsoft Power BI can consume OData queries, which return filtered and aggregated sets of data in the form of a JSON data payload.

For example, running the following OData query in the browser will return the Cycle Time for work item #42:

https://analytics.dev.azure.com/scrum/fabrikam/_odata/v3.0-preview/
WorkItems?$filter=WorkItemId%20eq%2042&$select=WorkItemId,Title,WorkItemType,State,CycleTimeDays

The returned JSON shows the calculated Cycle Time:

{
  "@odata.context": "https://analytics.dev.azure.com/scrum/fabrikam/_odata/v3.0-preview/$metadata#WorkItems(WorkItemId,Title,WorkItemType,State,CycleTimeDays)",
  "value": [
    {
      "WorkItemId": 42,
      "CycleTimeDays": 5.3333333,
      "Title": "PBI 42",
      "WorkItemType": "Product Backlog Item",
      "State": "Done"
    }
  ]
}

For information on how to pull Cycle Time using OData, visit https://aka.ms/sample-boards-leadcycletime.
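Here is a sketch of consuming that feed programmatically. The organization ("scrum"), project ("fabrikam"), and personal access token below are placeholders from the example above; Basic authentication with a PAT (empty username, token as password) is the standard scheme for Azure DevOps REST endpoints:

```python
import base64
import json
import urllib.request

# Placeholder query from the example above: Cycle Time for work item #42.
URL = ("https://analytics.dev.azure.com/scrum/fabrikam/_odata/v3.0-preview/"
       "WorkItems?$filter=WorkItemId%20eq%2042"
       "&$select=WorkItemId,Title,WorkItemType,State,CycleTimeDays")

def fetch_payload(url: str, pat: str) -> dict:
    """GET an Analytics OData query, authenticating with a personal access token."""
    request = urllib.request.Request(url)
    token = base64.b64encode(f":{pat}".encode()).decode()
    request.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(request) as response:
        return json.load(response)

def cycle_times(payload: dict) -> dict:
    """Map work item IDs to their calculated Cycle Time in days."""
    return {item["WorkItemId"]: item["CycleTimeDays"] for item in payload["value"]}

# cycle_times(fetch_payload(URL, "<personal-access-token>"))
# For the sample response shown above, cycle_times returns {42: 5.3333333}.
```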

Unfortunately, Microsoft’s built-in analytics don’t provide all the necessary metrics, or at least they don’t provide them in an easy-to-consume way. I suspect that a few Azure DevOps practitioners have built their own analytics, consuming OData feeds, to provide the missing metrics, but nothing has been shared with the community that I am aware of. Instead of building your own, let me take a moment (again) to promote the ActionableAgile Analytics extension. Not only is ActionableAgile Analytics the world’s leading agile metrics and analytics tool, but it plugs right into Azure DevOps and provides analytics and metrics from within Azure Boards. You can see this in Figure 9-8.

A screenshot of the Throughput Histogram analytic in the ActionableAgile Analytics extension. A drop-down shows a dozen other views and analytics available.

FIGURE 9-8 The ActionableAgile Analytics extension offers many views and analytics.

Flow-Based Scrum Events

The Developers will use the existing Scrum events to inspect and adapt their definition of workflow and thereby help improve empiricism and optimize the value they deliver. What about the inverse? Will the Kanban practices—specifically the various flow metrics—impact Scrum events? Although the events aren’t changed by introducing complementary Kanban practices, they will be impacted. Scrum is still Scrum and it doesn’t change. The Scrum Guide in its entirety still applies. How the team practices Scrum may change.

Practicing Kanban in a Scrum context does not require any additional events. However, using a flow-based perspective and the use of flow metrics in the Scrum events strengthens Scrum’s empirical approach. In this section, I will go through each of the Scrum events and discuss briefly how the introduction of Kanban complementary practices might impact them. I refer to each as a “flow-based” event.

The Sprint

Kanban complementary practices don’t replace or diminish the need for Scrum’s Sprint. Even in environments where continuous flow is desired or achieved, the Sprint still represents a cadence—a regular heartbeat for inspection and adaptation of both product and process. Teams using Scrum with Kanban use the Sprint—and its events—as a feedback improvement loop by collaboratively inspecting flow metrics and adapting their definition of workflow.

Kanban practices can help the Developers improve flow and create an environment where decisions are made just-in-time throughout the Sprint based on inspection and adaptation. In this environment, those Developers rely on the Sprint Goal and close collaboration with the Product Owner and stakeholders to optimize the value delivered in the Sprint.

Some teams might be inclined to toss the Sprint. They see it and its events as a constraint. For complex, plannable work, I see the Sprint as a way to ensure that the Scrum Team sets a Sprint Goal and does some level of planning and retrospecting on a regular cadence. Besides, humans are hardwired for regular cycles. Sprints provide a container for self-management and experimentation. Without Sprints, how would a team plan to communicate with stakeholders, and when would it stop, reflect, and improve? The Kanban answer is "whenever," which I typically see implemented as "hardly ever."

Flow-Based Sprint Planning

Sprint Planning initiates the Sprint by laying out the work to be performed for the Sprint. This resulting plan is created by the collaborative work of the entire Scrum Team. The Product Owner ensures that attendees are prepared to discuss the most important Product Backlog items and how they map to the Product Goal. The Scrum Team may also invite other people to attend Sprint Planning to provide advice.

Flow-based Sprint Planning remains largely the same, although it has potentially more inputs—historical measures that may make it easier to create a forecast. Flow-based Sprint Planning uses flow metrics as an aid for creating the forecast and developing the Sprint Backlog. For example, the Developers may want to use their historical flow data to improve predictability.

One of the inputs to Sprint Planning is past performance. Many Scrum Teams use velocity, but in a flow-based context, the Developers can use their Throughput history to help create their forecast. Remember, Throughput is a count of PBIs, not a sum of their story points. A team can calculate Throughput from a Cycle Time scatterplot, a Throughput run chart, or a Cumulative Flow Diagram.

Tip Sprint forecasts can be improved by using Monte Carlo simulation. Monte Carlo simulations can replace traditional estimation practices when forecasting multiple PBIs—such as during Sprint Planning. Figure 9-9 shows the results of 10,000 Monte Carlo simulations in the ActionableAgile Analytics extension. The chart shows the probability of completing a specific number of PBIs in a set period of time (7 days in this case). In this example, there is an 85% chance that the Developers will complete 11 or more items in 7 days. Remember, there is never a 100% probability; there is always uncertainty. For a demo, visit www.actionableagile.com/analytics-demo. Regardless of what the simulation suggests, the Developers still have the final say on which PBIs get forecasted and pulled into the Sprint Backlog.

A screenshot of the ActionableAgile Analytics Monte Carlo simulation results of 10K trials. Lines show that within the next 7 days, 20 items will be completed with 50% probability, 15 items with a 70% probability, 11 items with an 85% probability, and 7 items with a 95% probability.

FIGURE 9-9 Monte Carlo simulations help improve multi-PBI forecasting.

Let’s assume the Developers have been able to complete between 4 and 20 PBIs per Sprint over the past 8 Sprints. They might calculate their Throughput to be 10 PBIs per Sprint, with a satisfactory level of probability (say, 85%). Assuming the workflow and policies haven’t changed for this Sprint, the Developers could simply forecast the top 10 PBIs. This would imply that those PBIs fit the Sprint Goal or that a coherent Sprint Goal could be crafted for those PBIs.
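A deliberately simplified Monte Carlo sketch of that forecast follows. It resamples whole-Sprint Throughput from a hypothetical 8-Sprint history (matching the 4-to-20 range above), whereas tools like ActionableAgile Analytics simulate at a finer, daily grain:

```python
import random

# Hypothetical Throughput history: PBIs completed in each of the last 8 Sprints.
history = [4, 8, 10, 12, 9, 20, 14, 11]

def forecast(history, probability=0.85, trials=10_000, seed=1):
    """Return the Throughput the team can forecast at the given probability,
    by resampling historical Sprints (one draw per simulated Sprint)."""
    rng = random.Random(seed)
    outcomes = sorted(rng.choice(history) for _ in range(trials))
    # This many PBIs (or more) were completed in probability% of simulated Sprints:
    return outcomes[int(trials * (1 - probability))]

print(forecast(history))  # the Throughput the team can forecast with 85% confidence
```

Because the result comes from the team's actual history, the forecast adapts automatically as the team, product, or process changes, with no story-point calibration required.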

But what about the size of those PBIs? Surely small PBIs will have a lower Cycle Time than large PBIs. While true, remember that Throughput has been calculated on multiple Sprints of actual PBIs of varying sizes and complexities. Probability suggests that upcoming work (in the same domain, developed by the same Developers, using the same technologies, with the same tools) should be of similar distribution.

Some Scrum Teams might spend time making sure that each PBI is about the “same size,” but that feels like waste to me. I don’t think the Product Owner or Developers should be forced to right-size their PBIs. They may want to for other reasons—to keep all Cycle Times to within a few days or less and to improve continuous delivery, for example—but for the sake of forecasting, the Throughput metric is smart enough to handle variation.

As an alternative to traditional right-sizing of a PBI, a Scrum Team may want to perform a quick check to see if each PBI is less than their SLE. This quick “estimate” will tell the Developers whether a PBI might exceed their expected Cycle Time. If so, then they might want to break it down into smaller PBIs. This can be done at Sprint Planning or during Product Backlog refinement, and it should take only a few moments to ask and answer that single question. For more information on Throughput-driven Sprint Planning, refer to fellow Professional Scrum Trainer Louis-Philippe Carignan’s article at www.scrum.org/resources/blog/throughput-driven-sprint-planning.
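That single-question check can be sketched as follows, with a hypothetical SLE computed as the 85th percentile of past Cycle Times:

```python
# Hypothetical Cycle Times (in days) of recently finished PBIs.
history = [2, 3, 3, 4, 5, 5, 6, 8, 9, 14]

def sle(history, probability=0.85):
    """SLE as a percentile: '85% of our PBIs finish within N days'."""
    ordered = sorted(history)
    index = min(len(ordered) - 1, int(probability * len(ordered)))
    return ordered[index]

def needs_breakdown(rough_estimate_days, sle_days):
    """The quick Sprint Planning / refinement check: likely to exceed our SLE?"""
    return rough_estimate_days > sle_days

print(sle(history))                       # 9: "85% of PBIs finish within 9 days"
print(needs_breakdown(12, sle(history)))  # True: consider splitting this PBI
```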

Note Scrum Teams wanting to practice Kanban and enjoy flow-based Sprint Planning will probably not create their Sprint plan in the form of tasks. If you want flow metrics to be calculated by Azure DevOps Analytics or the ActionableAgile Analytics extension, you'll need to use the Kanban board rather than the Taskboard. The good news is that the Kanban board supports associating Task work items (as well as Test Case work items) with each PBI, as you can see in Figure 9-10. These tasks don't drive the workflow—they are just helpful reminders of the activities required for that PBI.

A screenshot of the Kanban board, focusing on the Coding column. The “Twitter feed” PBI indicates 5 associated Task work items and 5 associated Test Case work items. The tasks are listed with checkboxes next to them. The “Sign up for account” and “Install SDK” tasks are checked off.

FIGURE 9-10 PBIs on the Kanban board can have associated Task and Test Case work items.

For Developers who want to continue to use the Taskboard, it would be theoretically possible for an innovative developer to calculate Cycle Time based on the elapsed time between when the first Task work item for a PBI was pulled into the To Do column and when the last Task work item was pulled into the Done column (or when the PBI was set to the Done state). There would be a lot of nuance to work through here and maybe I’ll get around to building something someday—or you can.

Flow-Based Daily Scrum

A flow-based Daily Scrum focuses on ensuring that the Developers are doing everything they can to maintain a consistent flow. Although the goal of the Daily Scrum remains the same as outlined in the Scrum Guide—to focus on progress toward the Sprint Goal and produce an actionable plan for the next 24 hours—the meeting itself takes flow metrics into consideration and focuses on where flow is lacking and on what actions the Developers can take to get it back.

Some Developers prefer to have their Daily Scrum in front of their Kanban board. This way, they can “walk” the board from right to left, focusing on PBIs (not people) while discussing the current state of work and what happened during the last 24 hours, as well as what is likely to happen in the next 24 hours. My only caution is to make sure that Developers are talking to one another (and listening) and not just focusing on the board. Scrum Masters may need to keep an eye on this.

Here are some inspections that can occur at a flow-based Daily Scrum:

  •    What do we need to get the work closest to the right to Done?

  •    What work is blocked and what can the Developers do to unblock it?

  •    What work is flowing more slowly than expected and why?

  •    What is the Work Item Age of each PBI in progress?

  •    Have any PBIs violated—or are they about to violate—their SLE, and what can be done to get that work completed?

  •    Are there any factors that may impact the Developers’ ability to complete work today that are not represented on the board?

  •    Is there any work that has not been made visual?

  •    Are the Developers pulling work to fulfill WIP limits, or is there extra capacity?

  •    Have the Developers broken any WIP limits?

  •    If WIP limits have been exceeded, what can be done to ensure that WIP will be completed?

  •    If WIP limits have been exceeded, can the Developers swarm to get their WIP under control?

  •    Are WIP limits set correctly?

  •    Have the Developers learned anything new that might change the plan for the next 24 hours?

  •    Are the Developers finishing work before starting new work?

  •    Are there any impediments to flow?

Although some Developers prefer to use a burndown chart to monitor progress, flow-based teams may find burndowns ineffective. Sprint burndowns typically don't represent flow because they are based on tasks (work invested and work remaining). They don't provide transparency to flow or an actionable path toward achieving the Sprint Goal. Instead, the transparency provided by the Kanban board itself is typically enough for most flow-based teams.

The Work Item Age metric provides additional transparency to specific PBIs that are struggling. The Developers should compare this metric against their SLE; otherwise, poor flow can go unnoticed. Developers using flow-based metrics can use a Cumulative Flow Diagram to provide transparency of their flow. If the Developers really want a burndown chart, they should use one based on PBI work items rather than Task work items. Fortunately, the Burndown analytic in Azure DevOps supports both configurations.

Blocked Work

Blocked work occurs whenever the Developers have to wait for someone or something before work can proceed. This may include waiting for someone to solve a problem or provide information. It may also include waiting for something like software to be installed, hardware to be configured, or a test environment to be provisioned. Blocked work does not include PBIs that are in the Done sub-column, waiting to be pulled to the next column (for example, Design-Done waiting to be pulled to Coding-Doing). Blocked work, however, is still counted against the WIP limit.

When a PBI gets blocked—for whatever reason—the Developers should not simply put that blocked item aside and start working on something else. This may be a human response, but it’s definitely not flow. This is one reason for not having a “Blocked” column on the Kanban board—to discourage this behavior.

If the Developers want to visually indicate that a PBI is blocked, I recommend adding a “Blocked” tag to the PBI work item. The Kanban board can then be configured to show tags and even style them and the card backgrounds based on rules. You can see the results of these styling configurations in Figure 9-11.

A screenshot of the Kanban board, focusing on the Coding column. PBI #8473 “Twitter feed” shows its “Blocked” tag in bold, with a dark-red background. Also, the PBI card itself has a light-red background.

FIGURE 9-11 Blocked work should be made visible on the Kanban board.

While I'm on the topic of styling cards on the Kanban board, another style to consider is one that indicates which cards are aging beyond a certain threshold. Figure 9-12 shows an Aging rule being created to highlight those cards that have not changed within the last two days. Additional styles can be added for different depths of coloration (based on more aging). This won't generate a proper Aging analytic, but it will give the Developers a quick visualization of which PBIs aren't moving. Remember that Work Item Age is a leading indicator of whether a PBI will miss the SLE.

A screenshot of the Board settings dialog. The Styles page is selected and a new style named “Aging” is being added which defines a medium beige background color for cards with a Changed Date less than or equal to “@Today – 2”.

FIGURE 9-12 Styling PBIs that haven’t changed in the last two days.

Blocked work impedes flow, accumulates Cycle Time, and should be addressed. The Developers should discuss when aging items are considered to be blocked and for what reasons. They should also discuss if/when to violate WIP limits in these situations. Even though starting new work is almost never the right answer, there may be times when it is. The Developers should also discuss when the item should be kicked out of the system—whether put back on the Product Backlog, renegotiated and re-created, or dropped.

Tip Professional Scrum Developers commit to doing everything they can to unblock work as expeditiously as possible. They closely monitor bottlenecks and workflow policies regarding the order in which they pull items through the system so that PBIs do not age unnecessarily.

Impediments usually break down into two categories: hindrances and obstructions. Both are bad and can limit flow. Although obstructions (or "blockers") need to be mitigated right away, both should be discussed at the Sprint Retrospective, where experiments can be put forth in an attempt to minimize or eliminate them in the future. Helping alleviate repetitive blockers—such as dependencies that extend beyond the Scrum Team—should become the Scrum Master's highest priority.

Flow-Based Sprint Review

The purpose of the Sprint Review is to inspect the outcome of the Sprint and determine future adaptations. The Scrum Team presents the results of their work to key stakeholders and progress toward the Product Goal is discussed. During the event, the Scrum Team and stakeholders review what was accomplished in the Sprint and what has changed in their environment. Based on this information, attendees collaborate on what to do next.

Inspecting flow metrics and related visualizations as part of the Sprint Review can create opportunities for new conversations about monitoring progress toward a goal. For example, by reviewing Throughput, the Scrum Team and stakeholders will have additional insight about likely scope and delivery dates.

As previously mentioned, Monte Carlo simulations can be used for forecasting when a set of PBIs might be completed. Stakeholders may appreciate the calendar view visualization provided by the ActionableAgile Analytics extension, as you can see in Figure 9-13. This view is powerful, showing red, orange, and shades of green based on the increasing probability of the remaining items in the release to be done by those dates. Stakeholders may require some explanation of flow metrics, probabilities, and Monte Carlo.

A screenshot of the Monte Carlo simulation results of 10K trials. The calendar view at the bottom shows the upcoming months – July through December. Days through mid-November are colored red. Mid- to late November are colored orange and then light green. December is all green, with a darkening gradient into the month.

FIGURE 9-13 The calendar view of a Monte Carlo simulation helps set stakeholder expectations.

Another item to discuss at the Sprint Review is the SLE. Stakeholders should know what the number is, what it means, and what the probability is behind it. If the SLE has changed, the Sprint Review is a good opportunity to announce that. Depending on the stakeholders, you may need to differentiate between SLE and SLA—which I contrasted earlier in this chapter. Be sure to remind them that “expectation” does not mean commitment or promise.

Some teams might be inclined to toss the Sprint Review—especially those who are practicing continuous delivery. They might see the meeting as a waste of time and would rather book time with stakeholders as needed. My first word of guidance to these teams would be to remind them that the Sprint Review is not an acceptance meeting. Scrum absolutely allows PBIs to be released to production as soon as they are Done—if the Product Owner desires. The Sprint Review is an opportunity for the stakeholders to inspect the Done, working product. It’s also an opportunity to hear back from the stakeholders about the state of the market and other factors affecting the product and its goals.

I do like the idea of “booking time” with stakeholders regularly. Having this day-to-day interaction where stakeholders can inspect and adapt is fantastic, and it will result in a higher-quality product. These meetings don’t replace the value of the Sprint Review as an opportunity to take a step back and inspect and adapt the whole Increment, the Product Backlog, and the Product Goal with stakeholders based on changes to the business and market.

Flow-Based Sprint Retrospective

The purpose of the Sprint Retrospective is to plan ways to increase quality and effectiveness. The Scrum Team inspects how the last Sprint went with regard to individuals, interactions, processes, tools, and their Definition of Done. The Scrum Team discusses what went well during the Sprint, what problems it encountered, and how those problems were (or were not) solved. The Scrum Team identifies the most helpful changes to improve its effectiveness.

A flow-based Sprint Retrospective adds the inspection of flow metrics and analytics to help determine what improvements the Scrum Team can make to its processes. The Scrum Team can also inspect and adapt its definition of workflow to optimize flow in the next Sprint—just beware of making too many changes at a time. Using a Cumulative Flow Diagram to visualize the Developers' WIP, approximate average Cycle Time, and average Throughput can be valuable during the Sprint Retrospective. This is also an opportunity to discuss what work was blocked, why it was blocked, and how to reduce or remove those blockers in the next Sprint.
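The reason a CFD yields averages is Little's Law, which ties the three flow metrics together over a period (assuming a reasonably stable process). Here is a tiny sketch using hypothetical daily readings taken off a CFD:

```python
# Hypothetical data read off a CFD: WIP and items finished, per day.
daily_wip = [4, 5, 5, 6, 5, 5]
finished_per_day = [1, 1, 2, 1, 1, 0]

avg_wip = sum(daily_wip) / len(daily_wip)                       # 5.0 items
avg_throughput = sum(finished_per_day) / len(finished_per_day)  # 1.0 items/day

# Little's Law: average Cycle Time = average WIP / average Throughput.
avg_cycle_time = avg_wip / avg_throughput
print(avg_cycle_time)  # 5.0 days
```

This is also the arithmetic behind the earlier observation that longer Cycle Times depress Throughput: with WIP held constant, the two move in opposite directions.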

Smell It’s a smell when I see a team or a Developer wait until the Sprint Retrospective to make inspections or improvements. Although the Sprint Retrospective is the formal opportunity—so that it happens at least once per Sprint—a Professional Scrum Team should consider taking advantage of process inspection and adaptation opportunities as they emerge throughout the Sprint. Similarly, changes to a Scrum Team’s definition of workflow may happen at any time. Because these changes will have a material impact on how the Scrum Team performs, changes made during the regular cadence provided by the Sprint Retrospective event will reduce complexity and improve focus, commitment, and transparency.

Chapter Retrospective

Here are the key concepts I covered in this chapter:

  •    Flow   The movement of customer value through the product development system.

  •    Workflow   The approach to turning PBIs into Done, working product. The definition of workflow represents the Developers’ explicit understanding, which will improve transparency and enable self-management.

  •    Kanban board   An interactive visualization of the team’s workflow. Not to be confused with the Taskboard. Azure Boards supports both.

  •    Work in Progress (WIP)   The number of PBIs started but not yet finished. WIP can be assessed for the entire board or for a specific column/state.

  •    WIP Limits   Policies enacted by the Developers to reduce the amount of WIP per column/state. Enabling WIP limits creates a pull system and improves flow.

  •    Flow metrics   Metrics based on how the Developers work, using easy-to-understand units of measure such as days and item counts.

  •    Cycle Time   A flow metric representing the amount of elapsed time between when a PBI starts and when it is Done. Cycle Time is a lagging indicator and best answers the question “How long will it take this PBI to complete?”

  •    Work Item Age   A flow metric representing the amount of time between when a work item started and the current time. This leading indicator applies only to items that are still in progress. For Scrum Teams, PBIs in the Product Backlog are not considered to be aging.

  •    Throughput   A flow metric representing the number of work items finished per unit of time, such as a Sprint. Throughput is a lagging indicator and best answers the question, “How many PBIs will I get by the end of the Sprint or in the next release?”

  •    Cycle Time scatterplot   An analytic that informs how long it takes to complete PBIs. The x-axis represents the timeline and the y-axis represents Cycle Time in days. Dots represent the intersection of a date and the number of days (Cycle Time) that it took that PBI to complete (for example, move to the Done column).

  •    Throughput run chart   An analytic that determines how many PBIs have been completed (Done) over a period of time. The x-axis represents the timeline and the y-axis represents the number of PBIs completed on that date.

  •    Cumulative Flow Diagram (CFD)   An analytic that shows the count of PBIs in each column for a period of time. From this chart you can gain an idea of the amount of WIP, average Cycle Time, and Throughput. The x-axis represents the timeline and the y-axis represents the count of PBIs.

  •    Work Item Aging   An analytic that tracks the age of in-progress PBIs. The x-axis represents all columns of the process and the y-axis represents how long each PBI has spent in that column. The dots represent the number of PBIs that spent that many days in that column.

  •    Service level expectation (SLE)   A forecast of how long it should take any given PBI to flow from start to finish within the team’s workflow. The SLE itself has two parts: a period of elapsed days and a probability associated with that period. The Developers can use their SLE to find active flow issues and to inspect and adapt in cases of falling below those expectations.

  •    ActionableAgile Analytics   The world’s leading agile metrics and analytics tool, with an extension for Azure DevOps.

  •    Flow-based Sprint Planning   Sprint Planning, as defined in the Scrum Guide, but using flow metrics (such as Throughput) as an aid for developing the Sprint Backlog.

  •    Flow-based Daily Scrum   Daily Scrum, as defined in the Scrum Guide, but with additional inspections to ensure that the Developers are doing everything they can to maintain a consistent flow.

  •    Flow-based Sprint Review   Sprint Review, as defined in the Scrum Guide, but including the inspection of flow metrics as a way to monitor progress and predict likely delivery dates.

  •    Flow-based Sprint Retrospective   Sprint Retrospective, as defined in the Scrum Guide, but including the inspection of flow metrics and analytics to help determine what improvements the Scrum Team can make to its workflow and processes.