2.

AGILE TRANSFORMATIONAL PRACTICES

Chapter One showed how transforming to a digital business is critical for your corporation, your role, and probably your career. You’ve even done some strategic work to think through what your digital future might be, and, if there are already internal discussions about digital transformation, hopefully you’ve earned enough credibility to get a seat at the table.

This chapter and the following ones will help you transform the IT organization of old into one that’s ready to take on new digital responsibilities and to change the organization at a significantly faster pace than it has performed in the past. I will share secrets to aligning with business needs and developing differentiating capabilities that drive business growth and cost efficiencies.

My methodologies have several disciplines that ultimately roll up to a strategic set of digital and technology practices, but I will present them one practice at a time. Once we review them, I will roll them together so that you will see how they all drive digital and reinforce smart-fast execution, strategic thinking, culture change, and development of new business capabilities.

Understanding Agile Practices

Many leaders already have a good understanding of agile. Some have scrum or kanban running with multiple teams. But even when I meet transformational leaders that have some agile practices running in their organization, many have not experienced an agile culture or leveraged agile in transformation efforts.

This chapter will cover the key concepts and practices that agile leaders need to understand. I will outline some key tenets of agile governance, then go into agile planning and estimation. The chapter continues with an introduction to foundational technology practices and ends with a definition of release schedules.

Why Agile?

Let’s say you know there is a business opportunity and demand for a new application. You want to get momentum to define a project and be a partner to determine whether there is a strong business case to implement it. The first thing to do is articulate a vision—what you’re trying to accomplish (definition of business need and opportunity) and why (what benefits, both financial and performance driven). For this example, let’s simplify your options to getting this initiative started given that it’s just an idea without a complete vision.

OPTION 1—LET’S CALL THIS THE CLASSIC APPROACH

You go back to the sponsors, assign a business analyst, listen to what is required, and work on requirements and a project plan. It takes a couple of months to have all the meetings with stakeholders to gather requirements, document, and review.

At the end of the process, you ask your engineering team to estimate a solution, but under business pressure to respond, you allow them only a couple of weeks to complete it. The engineers do the best they can at outlining a technical approach and laying out a whole list of assumptions. The technologists feel pressure to provide an inexpensive solution, but they would prefer to use this opportunity to phase out a legacy system.

Your business stakeholders receive a document outlining a solution and assumptions, and some show up to the business review. But their level of understanding of the details is limited given their expertise. At the end of the day, they skip all this detail and look at the bottom-line cost and duration to execute. The engineers have recommended a minimal solution that can be implemented in six months, but the sponsors haven’t read the details and assume they are getting everything they want in a reasonable time frame.

The team is commissioned to work, and all is good until you are two months into the project and things are starting to go south. The project is taking longer than expected, and the team is falling behind. At the same time, market conditions are changing, and the sponsors are asking for new things added to the scope. Your team adjusts, but by month three they are further behind and you must extend the timeline. By month four, there are more changes, and the team is getting frustrated having to modify things already developed while sponsors are getting nervous about whether the project will be completed to their expectations at all.

This seesaw of changing requirements and implementation complexities carries on unless the team and sponsors decide to work collaboratively toward a common milestone. In many cases, this doesn’t happen.

This project methodology, where requirements are defined before implementation, is known as waterfall. It is a less successful methodology in an environment where sponsors are challenged to define requirements fully, where changing market conditions require reevaluating implementations, and where the urgency to deliver products faster requires engineering teams to make compromises in their solutions.

OPTION 2—LET’S CALL THIS AGILE

Now let’s look at the same scenario but implemented from the ground up using agile practices. We’ll start off from the same point, where there is only the identification of an opportunity or need but no other details. Instead of throwing an analyst in to work with sponsors on “requirements,” imagine bringing a small team of experts together to brainstorm a vision. That vision focuses on a future point with a defined statement on the opportunity, needs, and benefits. By bringing the team together, there is a shared understanding of what the opportunity is and why it’s beneficial.

The agile methodology requires assigning a team to work with the product owner to identify the most business-critical aspects of the project, the technical risks, and other unknowns. The team then commits to completing deliverables in a relatively short period, usually one to three weeks. After this period, the team demos their work and then focuses on the next set of critical items for the one- to three-week period.

Obviously, I’m grossly simplifying but want to point out a couple of key differences. In waterfall, the project team must engage the business manager for a significant amount of time to define requirements. Projects are broken into milestones that have no specified rhythm of delivery (some milestones may be weeks, others months) and no requirement or expectation to demo functionality.

In agile, leadership is getting several significant advantages. Teams can start working on the most critical features and risky technical areas without overtaxing the business sponsors for upfront information. The frequency of delivery in one- to three-week “sprints” leads to better execution. Let’s say your team does sprints that are two weeks in duration. In three months, they complete six iterations, giving them plenty of time to prove themselves, mature the agile process, and address risks. Agile then allows sponsors to prioritize at the beginning of each sprint, enabling a stronger business and IT alignment. The sponsor gets to prioritize based on customer feedback, and the IT team gets frequent and direct engagement from the business sponsor.

Why Agile Is a Key Transformational Practice

When you sign up to lead transformation, the implication is that you are going to review existing products, business processes, and capabilities and realign to a new vision. Transformation is a change management practice, so the organization must enable a culture, philosophy, governance, and practice to manage the change.

Unfortunately, we no longer live in a static world. We can’t portray our digital business future with certainty since so many market forces are transforming in parallel. So thinking that you can manage transformation the same way we constructed buildings, bridges, and rockets in the past is outdated. In fact, construction projects are now leveraging elements of agile1 and lean to enable greater flexibility.

For some organizations, just adopting basic agile practices is good enough to achieve a higher level of execution. These organizations will define their sprint and release schedule, practice standups, and use tools to document user stories. Even at larger scales, just adopting these basic practices provides value as it aligns business stakeholders and provides flexibility to adjust priorities.

But to transform organizations, you need to evolve agile beyond these basic practices into a disciplined, scalable process; a practice that connects to other functional areas such as marketing and operations; and an organizational change effort that drives toward an agile culture.

The rest of this chapter will provide these foundations.

Defining Agile Roles and Responsibilities

A key tenet of the agile manifesto2 relates to self-organizing teams: “The best architectures, requirements, and designs emerge from self-organizing teams.” There’s a wide range of interpretation of what “self-organizing teams” means. The problem with teams that are too self-organizing is that they may lack a full understanding of the business and technology strategy, and the lack of governance may lead teams astray. However, teams that are managed tightly, don’t ask questions, are slow to think out of the box, or get lost working collaboratively to find solutions to ill-defined requirements may never achieve results that can drive transformation.

I have found that teams need basic roles, responsibilities, and governance to be defined in order to be successful. Unlike startups, most organizations fill their agile teams with a mix of people, some with agile experience, others with minimal exposure. Some organizations rely heavily on offshore or distributed team members, and some larger projects have the complexity of requiring multiple teams. In all these scenarios, you should provide structure to be successful.

SINGLE TEAM AGILE ROLES AND RESPONSIBILITIES

Structure in agile practices starts with defining basic roles and responsibilities. Some of the constructs laid out in this section are standard for agile teams like the role of the product owner and technical lead. But beyond that, agile doesn’t formalize roles within the team. I have found that teams need these responsibilities defined to achieve an optimal cadence of delivery, and the structures need to address several different challenges.

What is Scrum in 30 seconds

Agile teams operate in sprints that are usually one or two weeks in duration and are asked to commit to completing a list of user stories. Each story articulates a requirement, how it benefits the user, and a list of pass/fail acceptance criteria. Stories are typically grouped together into epics to make it easier for team members to navigate and manage the entire backlog of stories. At the end of a sprint, the team demos their completed stories, and the product owner evaluates their completeness. Issues are characterized as defects if they impact users, miss business rules, or fail acceptance criteria. Technical debt is also captured when the team acknowledges that improvements are needed in the underlying implementation.

Teams assign a size to each story and then attempt to commit to a consistent velocity of stories, measured as the aggregate of the sizes, every sprint. They use daily meetings, called standups, to communicate status and raise any blocks that might impede their ability to complete stories. At the end of a sprint, teams usually call a retrospective meeting to discuss process improvements. Releases to production can be done every sprint or over multiple sprints.
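To make these terms concrete, here is a minimal sketch of how stories, epics, and velocity relate; the class names, fields, and point values are illustrative rather than a prescribed schema from any agile tool.

from dataclasses import dataclass, field

@dataclass
class Story:
    summary: str                    # the requirement and how it benefits the user
    acceptance_criteria: list[str]  # pass/fail checks evaluated at the sprint demo
    points: int = 0                 # size assigned by the team
    done: bool = False              # completed and accepted by the product owner

@dataclass
class Epic:
    name: str                       # groups stories to ease navigating the backlog
    stories: list[Story] = field(default_factory=list)

def sprint_velocity(stories: list[Story]) -> int:
    # Velocity is the aggregate size of the stories that reached "done"
    return sum(s.points for s in stories if s.done)

checkout = Epic("Streamlined checkout", [
    Story("Guest checkout", ["Order placed without an account"], points=5, done=True),
    Story("Saved payment methods", ["Card stored and reused"], points=8, done=True),
    Story("One-click reorder", ["Previous order repeated"], points=13, done=False),
])
print(sprint_velocity(checkout.stories))  # 13: only completed stories count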

One issue is that the product owner and technical lead have significant responsibilities in agile practices but don’t always have the skill or time to complete all of them. Product owners are asked to write stories, but they may not be skilled at writing requirements. Technical leads can become roadblocks if they take on solving all the technical challenges in the backlog.

Another challenge is that many agile teams today are distributed and may not be in a single location. Distributed teams can arise for different reasons. You may have a business team in one location and a technical team in another. You might be working with one or more services providing development or testing team members at different locations. For smaller organizations, you might have fully remote individuals working from home. Very large organizations may attempt to implement 24-hour development lifecycles by distributing teams across the globe. Whatever the structure, roles, responsibilities, and practices need to be defined and adjusted to the realities of distributed teams.

The last challenge is how teams balance current sprint execution and future sprint planning. Teams that completely focus on executing the current sprint’s commitments can fail to respond to business stakeholder needs to forecast longer-term timelines and roadmaps. I’ll be sharing agile practices to enable planning, estimation, roadmapping, and forecasting, but for now the team structure needs to enable both in-sprint execution and future sprint planning.

So, with these challenges in mind, I have used the same team structure shown in Figure 2-1 with the following responsibilities:

• Product owner: Defines the vision of the product, the release, and the epic. Prioritizes epics and stories. Reviews story definition and acceptance criteria. Reviews all estimation and seeks out minimal viable solutions. Reviews and signs off on completed stories. Aggregates customer and operational feedback to reevaluate priorities.

• Business analyst (BA): Primary responsibility is to write stories, document requirements, and define acceptance criteria. Acts as moderator in discussions between the product owner and technical lead when reviewing implementation options and seeking minimal viable solutions. Answers questions from the development team during sprints to ensure there is a shared vision of what and how to implement. The business analyst is often the best person to lead other documentation efforts.

• Technical lead (TL): Leads the delivery of the product. Partners with the product owner to define solutions. Breaks down epics into stories and partners with the business analyst, contributing nonfunctional requirements and technical acceptance criteria. Leads review of completed stories for completeness and adherence to technical standards. Leads retrospectives and captures technical debt. Also responsible for getting the team to a predictable velocity and for recommending a balance of functional, technical-debt, and defect-fixing stories that lead to on-time releases.

• Dev 1, Dev 2, QA 1: Represent members of the development team, who are responsible for reviewing stories, sizing them, committing to the sprint, and ensuring that all stories are “done” at the sprint’s completion. If you adopt the agile manifesto to its essence, the requirement is for shippable code at the end of every sprint.

Figure 2-1. Roles on a small agile team

The agile planning team should spend most of its effort working on priorities, vision, and requirements for the upcoming sprints. Ideally, they are involved only in the current sprint to answer questions and to review the results. The agile development team, on the other hand, is primarily working on the current sprint. When they get toward the end of the sprint, they must dedicate part of their time to review the next sprint’s priorities, ask questions, size stories, and commit.

This team structure and role definition also work for a distributed team where the agile planning team is collocated with the business and the agile development team is nearshore or offshore, either as a captive or working with a service provider. In that situation, one of the developers should have leadership skills to lead the team through story-sizing meetings, commitment, daily standups, and retrospectives. In addition, if a service vendor is being used, it can be beneficial if the business analyst or the technical lead is from the service provider, but I wouldn’t recommend both roles being staffed by them.

You’ll notice that I’ve avoided using the scrum master role that is usually associated with leading agile scrum teams. I’ve never assigned a scrum master role in my agile organizations and prefer distributing those responsibilities across the team. If the organization is transitioning from other practices to agile and already has project managers, then it can be beneficial to assign them most of the scrum master responsibilities. This is especially needed if existing business analysts and technical leads have little exposure to agile, have limited leadership skills, or have little practice working directly with product owners and business stakeholders.

However, it is very feasible to run a small agile organization without dedicated project managers. This can be realized if the business leaders and stakeholders can live with limited reporting and financial governance, which is why most startups succeed with agile without project management. In a small-team framework, the traditional responsibilities of a project manager are absorbed by the agile planning team, so for example, project reporting, communications, and financials are managed by the product owner, while resource management and risk mediation are managed by the technical lead.

LARGER TEAMS

Sometimes a program requires a larger team, either because it requires multiple technical skills or because the software is highly modular and an increased velocity can be achieved by adding people. Figure 2-2 illustrates that this can be achieved but within limits. The team in that figure is the largest that I would recommend under most circumstances, and the added overhead is a QA lead to ensure that testing is properly conducted. It’s very important that one of the developers other than the technical lead take responsibility to oversee in-sprint activities like reviewing stories, conducting standups, completing commitments, and escalating blocks.

Figure 2-2. Roles on a larger agile team

DISTRIBUTED TEAMS

If you are working with a development partner, then the agile team structure can easily support this model. In these cases, I recommend that either the BA or the tech lead come from your service provider, while the other ideally should be an employee. I’ve done it either way, depending on what skills were available, so if I had a strong technical lead, then I would pull a BA from the service provider; other times I would request a technical lead and resource the BA internally. The latter scenario works well if you are adopting a new technology and an external service provider can provide the expertise to lead the adoption.

In a distributed team, I don’t recommend having both the BA and the tech lead filled by employees or by the service provider. If both are employees, then it can be more challenging to bridge cultures and build camaraderie with members coming from the service provider, and teams often separate into “in house” and “outsourced” subteams. Even if that doesn’t happen, teams must work a little harder bridging communications with the offshore team members.

Having both BA and tech lead coming from the outsourcer creates other challenges. If any of the other team members are employees, they must report to team leaders coming from the service provider, a working arrangement that isn’t trivial unless other employees are overseeing the team’s execution.

If the BA, tech lead, and all team members are coming from the service provider then this is essentially an outsourced team. Leaders looking to fully outsource need to consider developing service-level agreements and align on success criteria. Other considerations are documentation, knowledge transfer, and continuity if the team is being commissioned for only a fixed scope and duration of work.

The other form of distributed team is when team members are spread across multiple locations, time zones, and service providers. This is the most complex situation, and management needs to adapt the practice to the specifics of the situation. For certain, management will need to consider additional technologies like instant messaging, video conferencing, electronic whiteboards, and other collaboration tools to ensure that team members can collaborate efficiently. See Figure 2-3.

Figure 2-3. Filling people and skills on agile teams between employees and service providers

For a large team with a commissioned QA lead, I have had success placing this role both onsite and offshore. When the team is just getting started and QA standards need to be developed, having a QA lead working with the BA and tech lead can be beneficial. Once the technology and testing foundations are in place, having the QA lead offshore can ensure that the testing strategy is being executed properly.

MULTIPLE TEAMS

You may need multiple teams for several reasons. The most logical reason is that you might have multiple products or projects with sufficient business need and rationale to dedicate separate teams to each effort. You might have geographic separations where separate teams help minimize communication overhead. These are easy scenarios to carve out since there are few dependencies between teams.

You may also have products or projects that have logical separations where teams can work largely independently. It’s important to craft these teams to minimize the dependencies between them but also to ensure that each team can deliver business value independently and ideally through its own release schedule. One example is a team developing APIs to enterprise systems while other teams develop applications that leverage them.

Figure 2-4 provides a generic structure to support multiple teams that introduces several new managerial and governance roles.

Figure 2-4. Multiteam agile organization

The “delivery manager” oversees the work of multiple development teams and agile backlogs. She ensures that teams adhere to both technical and agile governance practices and standards. In situations where the delivery manager is overseeing the entire enterprise’s programs, she will own the technical and development standards, roadmaps, and governance.

The delivery manager will likely elect a program staff that complements her skill set. At minimum, she’ll need someone to oversee elements of the technology architecture with the primary role of providing reference architectures and sprint-to-sprint guidelines to each team’s technical lead. In addition, depending on the expertise of the teams running agile projects and the level of reporting, coordination, or collaboration required by business teams, she may employ an agile program manager and possibly an agile coach. The program manager should set agile practice standards, review implementations, and address program risks. When rolling agile out to multiple new teams, an agile coach with experience in developing agile disciplines, practices, collaboration, and culture may help accelerate the organization’s adoption.

Leveraging Agile Tools

With a structure in place, the next thing you should decide on is a set of management and collaboration tools. When I started in agile, the available tools were not mature, and it wasn’t a bad idea to go low tech with stickies, cards, boards, or online spreadsheets to manage a backlog and share requirements with a team. Today, you can lose two months of productivity just reviewing all the options.

For agile practices to mature, selecting a tool—even the wrong tool—is a must, and I highly recommend implementing one on day one. Why? Because whoever you select to be a part of the agile practice will pave the way for others to join and contribute. These early participants will define the practice and ideally configure the tool and workflows as the process matures. They will help shape where to collect metrics, which reports to leverage, and which disciplines to standardize on. By giving them a tool, you’re enabling them to scale the agile practice as the transformation program expands in scope.

There is a prevailing wisdom that states that you should select a tool that fits the practice. In agile, I disagree. By selecting agile and ideally some related practice like scrum or kanban, you’ve already selected the process, and what remains is fitting it to the organization’s structure, culture, and business need. My recommendation is to let the capabilities, workflows, and reports baked into the tool help shape the practice. The developers of these tools know agile and have learned many best practices from their customers that they then embed in their tools. Why reinvent the wheel?

YOU SELECTED A TOOL, NOW WHAT?

Tools can become chaotic messes unless you set some responsibilities on who owns the configurations and establish some governance on which elements to invest time in configuring. If you’re working with multiple teams, you’ll need some guidelines on what should be standardized across teams versus what you’ll empower individual teams to configure on their own. You’ll also want to take steps to make sure that teams and individuals are following the process and leveraging the tools as expected. In my experience, I’ve seen both issues: teams that invest too much time in their process and tool configuration and lose sight of serving the business need, and teams whose usage lags to the extent that it inhibits execution. We’ll explore how to handle both situations.

Once you select a tool, assigning roles and responsibilities becomes important so that team members know who drives the configuration and in what areas. But how you assign the responsibility depends on what roles you have on the team, the number of teams, the skill of the team members, the time frame for scaling the practice, and logistical considerations.

Figure 2-5 shows how I assign tool and workflow ownership.

Figure 2-5. Roles and responsibilities configuring an agile tool for an initiative

Product owners should spend most of their time working with customers, sales, and technologists to shape the product. They may not be too technical and may also have limited experience with agile practices. For these reasons, I think it’s best to limit their role working with agile tools to areas directly tied to their primary responsibility regarding requirements and priorities.

Technical leads fall into two categories: (1) those who have some management skills and embrace disciplined agile through tools and (2) those who have less management and more technical skills. Regardless, I think the same principles apply as with product owners, and it’s better to optimize the technical lead’s time for working with the product owner, leading the team, and overseeing the technical implementation. Technical leads should be focused on estimating, providing technical requirements, and ensuring that the scope, feasibility, and timing of releases are realistic.

It’s fair to say that I’ve burdened the business analyst with many responsibilities, including ones more commonly aligned to project managers or scrum masters. If you have larger distributed teams, you can split some of these responsibilities with program managers.

You should adjust these priorities if you’re using service providers, depending on their skills and your level of trust with their abilities to develop core practices.

Figure 2-6 shows the governance if you are rolling out agile to multiple teams. These governance principles essentially define who owns the business rules, practices, and tool configuration.

Figure 2-6. Roles and responsibilities in managing releases

On my blog, I’ve published 10 best practices in configuring your agile project management tools.3 Once you have roles assigned and a tool selected, I would suggest reviewing these guidelines and coming up with an initial vision for the configuration. I then recommend that teams integrate practice maturity and tool definition into their agile plans. So, for example, you might start with guidelines on writing the summary line for stories in your first sprint, tagging conventions in later sprints, and reporting afterward.
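For instance, a first-sprint guideline for story summary lines could be as simple as a format the tool can enforce. The convention sketched below—a bracketed component tag followed by an action phrase—is purely hypothetical, meant only to show how lightweight such a guideline can be.

import re

# Hypothetical convention: "[component] action phrase", e.g., "[checkout] add guest flow"
SUMMARY_PATTERN = re.compile(r"^\[[a-z0-9-]+\] \S.*$")

def summary_follows_convention(summary: str) -> bool:
    # True if the story summary matches the team's agreed format
    return bool(SUMMARY_PATTERN.match(summary))

print(summary_follows_convention("[checkout] add guest flow"))   # True
print(summary_follows_convention("Add guest flow to checkout"))  # False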

Agile Planning Practices

So now, with organization and team structure defined and roles and responsibilities outlined, let me provide more details on what I mean by agile planning and its key practices. My assumption is that you and the team have a basic agile process running, with sprints, commitments, standups, demos, and retrospectives in practice.

To understand agile planning, let’s first look at some of the fundamentals of the agile development team from the vantage point of just getting started with the practice. This is a realistic scenario as most organizations that adopt agile start with a smaller pilot team and project and leverage their wins to grow the practice.

How Agile Planning Practices Evolve

That pilot team works with the product owner to understand the vision even if it isn’t formally defined. They go through a process, usually a series of meetings and discussions, to formulate an initial set of priorities. The team makes a loose commitment on the priorities, largely because they don’t know their velocity but also because they haven’t formalized practices for writing stories or making commitments. In all likelihood, and unless the team has employed an agile coach, the only thing that truly resembles agile in that first sprint is the notion of a team, a set of priorities, and the daily standup meetings.

Flash forward, and that team adopts more practices sprint by sprint. A backlog emerges that forces the product owner to prioritize. Teams make a best effort to commit and get stories done, but some complications emerge over time. The team fails at getting a story “done” or at aligning to the product owner’s expectations, so at a retrospective, the team formalizes story-writing standards to get a better shared understanding of requirements and acceptance criteria. In the next sprint, the team overcommits and fails to complete all the stories, so they formalize commitment at the start of the sprint and begin measuring their velocity. While these are fundamental agile practices, it often takes a pilot team several sprints to realize their importance and to self-organize process improvements.

These two process maturities improve the team’s execution in a sprint. When the team knows what’s expected of them through acceptance criteria and has a better understanding of their velocity, the accuracy of their commitment and sprint execution (as defined by the number of stories achieving “done”) improves.

FROM EXECUTION TO ESTIMATING PRACTICES

It is pretty common for product owners to begin thinking one or more sprints ahead of the active one. But therein lies a problem. The product owner knows what she wants but needs help from the team to understand cost or complexity. You might hear the product owner say, “If I know this feature is too big or complex, I might not prioritize it or might simplify what I need.”

It’s at that point that the product owner is ready to engage the team’s leadership in a dialogue about solutions and tradeoffs. This is essentially the beginning of the product owner and technical lead working out some rudimentary estimation practices. This gives product owners a sense of cost and complexity, so they develop a practice of simplifying their requirements or abandoning ones that are too costly to implement. For technical leads, sizing gives them a measured approach to manage commitment and velocity.

Two things happen while this team builds up their practice. First, executing the practice takes up more overhead. When the team started, the overhead was maybe a meeting or two to review priorities plus the 15 minutes of daily standups. Now there are commitment meetings, time spent to document stories, effort used to size stories, and practices established to monitor velocity. That investment in time diminishes what the team can use for actual development and testing, and the result creates a conflict. Should the team spend more time executing on the current sprint, or does it take on planning work for the next sprint?

The second thing that develops involves the questions and sometimes badgering from business leaders for timelines, project plans, scope “signoff,” or roadmaps. Some are used to getting these from waterfall project methodologies and haven’t let go of these constructs. Others grow impatient with agile and get frustrated when teams struggle to answer basic questions like, “What will you develop, and when will you deliver it?” Even when there is a product owner strongly aligned to agile principles, stakeholders will still require some forecast of what they will be getting when so that they can adequately plan business activities like marketing, training, and operational changes.

FROM BASIC ESTIMATING PRACTICES TO AGILE PLANNING

There is some reality to this need for timelines and scope because, even though the development team is agile, most likely their business teams are not so much. Marketing teams that are not agile need to plan the messaging, materials, and marketing plans associated with product upgrades. Operating teams need to determine workflow changes and consider training and documentation needs. Even when these teams culturally accept incremental releases and launching with minimally viable products, they still thirst for basic understanding of what they are getting. How can these groups endorse agile practices when the development team can’t respond to the basic question of what will be released and when?

It’s at this point when the product owner will likely feel external pressures to provide forward-looking projections. Telling stakeholders what will be delivered one sprint ahead is usually unsatisfactory to business leaders who want more definitive roadmaps. Now product owners usually don’t have an issue articulating a feature roadmap, but understanding feasibility, defining how something will work in practice, or providing realistic dates typically goes beyond their skill set and responsibilities in an agile practice.

Let me elaborate on these two points. I remember working with a very seasoned and successful agile coach back at Businessweek. He taught us the basics of agile practices, set up roles, and coached the team through their first sprints. He also seeded the foundation of our agile culture.

I was hired and started a couple of sprints into the process and was happy to see where we were but was immediately asked by the project sponsors to project dates and scope. I remember the coach fighting the need to estimate or to forward-project, claiming that it “wasn’t agile.” He was right because the team wasn’t ready to implement estimation and planning practices that lead to these forecasts. In addition, the business teams weren’t ready to operate properly with forecasts. If they interpreted a projection as a fixed-time, fixed-scope deliverable, then we might be practicing agile but failing to achieve agile thinking and culture. Beyond just agile practices, it’s agile thinking and culture that develop superior outcomes by adjusting priorities based on customer feedback, promoting innovation, and adjusting to practical realities.

I will tell you that the pressure never subsided. We needed agile planning practices, but they needed to be rolled out when the working team was ready to instrument and when the business teams were ready to partner with them.

The second point is a word of warning to product owners who elect to communicate scope and dates without feedback from their team and a defined practice on how these will be calculated. My history working with these product owners is that they often fail to deliver on scope and time and then will likely blame their team for the variances. The product owner’s primary responsibility is to represent the customer, set the vision, and guide priorities. While some product owners are “technical enough to be dangerous,” most of the ones I’ve worked with are nontechnical and have only a basic understanding of platforms, architecture, coding practices, and testing complexity. What looks simple on paper may turn out to be complex, and what is complex may also be simplified. Product owners generally don’t have the technical skills to evaluate feasibility or develop technical solutions. In addition, they generally don’t have project management skills that enable them to use data to forecast. Bottom line—many product owners generally don’t have the skills to forecast. Even the ones who do run the risk of alienating their teams if members are not involved in the forecasting process.

MANAGING THE PARADOX OF AGILE EXECUTION AND PLANNING

These two needs, the need to execute on the current sprint and the need to plan and forecast future sprints, create a difficult paradox for agile teams and leaders. Focus too much on the current sprint, and you’re likely to commit to getting more done at the expense of having stories written and estimates ready for future sprints. But to improve the accuracy of forecasting releases and roadmaps, time and effort need to be dedicated to understanding requirements, developing solutions, and producing estimates.

Teams that don’t dedicate this time for planning can fall victim to misalignment issues with their business stakeholders. If you underinvest in forward sprint planning and resort to high-level estimates, “T-shirt sizing” as one of my teams used to call it, then there is a strong likelihood of inaccurate forecasting that will undermine the culture changes required for agile transformation. Worse, it can lead to a backlash against using agile practices.

Teams should prove they can execute on agile development first before they can take on agile planning disciplines. But reporting, communications, planning, and forecasting are needed by most organizations, and teams need to develop these practices and then balance their efforts.

The team structure proposed in Figures 2-1 and 2-2 enables assigning planning and execution responsibilities. The agile planning resources are roughly 80% focused on future sprints and 20% on the current one, while the agile development team is 80% executing on the current sprint and 20% working toward the next sprint.

HOURS OR POINTS?

Now that you have a methodology, the question is whether to estimate and size in hours or to use “story points,” an artificial measure that often factors in complexity and effort.

Estimating in hours is almost always preferred by business stakeholders because they can translate it directly to cost. Unfortunately, this doesn’t always work in practice, for a couple of reasons. First, developers struggle to estimate their effort and time, especially when multiple skills are required to complete a story or when there are varying skill sets on the team. What might be 3 hours for the technical lead may be 12 hours for a more junior developer. What might be a 5-hour story to develop may require 10 hours of testing, some of which cannot be performed until the end of the sprint. Something that was sized at 20 hours because of some technical unknowns can sometimes be implemented in far less time if the risks do not materialize. So estimating development projects in hours has a high degree of variability.

Translating hours into cost has a second issue. It may provide a measure of development cost but not the underlying support costs. For example, something that required only 2 hours of development may yield a solution that’s so complex it leads to manual effort to implement or fix issues once it is deployed. Giving the business a direct translation of only the build costs without support considerations is a significant issue. It leads to quick builds and fixes with an underlying support complexity that is not formally articulated.

It is for these reasons that I never allow development teams to estimate and size in hours. I always lead them to measure in story points and ask them to come up with a rate card based on the Fibonacci sequence that measures both effort and complexity. So, for example, stories that are 1 or 3 points are low in effort and complexity, while a 21-point story has high complexity and effort. A team dialogue can help flesh out why a story has many points, and it’s the complexity factor that often gets reviewed with scrutiny. Why are the requirements driving to a complex solution? Is there a way to descope the elements that are complex? Is there a different solution that leads to less complexity?

I have also seen that estimating and sizing in points leads to reasonable accuracy. Teams size stories based on their past experiences, so if a story has a similar implementation to something they have done in the past, they will estimate and size based on this knowledge.

Business leaders can still get at cost by dividing the cost to run the program over a period of time (a sprint or a release, for example) by the team’s total velocity over that period to get a cost per story point. Multiplying this by the epic’s size (calculated by summing all the story points for the epic’s completed stories) yields a cost for the epic. But now this cost has some expression of complexity in its measurement. There is, however, a downside: you can only use this method to calculate a cost once stories are fit into a sprint schedule and you have an idea of what the team commits to during a sprint.

Example: Calculating actual development costs.

Let’s consider a simple example of calculating costs with a single team, a blended cost of $40,000 per 2-week sprint, and a 4-sprint release costing $160,000. In the release, the team completes three epics totaling 200 story points, with epic 1 requiring 40 points, epic 2 requiring 60 points, and epic 3 requiring 100 points. Epic 1 is thus 20% of the release’s productivity and costs $32,000, while epic 2 costs $48,000, and epic 3 costs $80,000. The product owner could also look at a more detailed breakout of which capabilities were the most taxing by performing a similar analysis at a story level.
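The arithmetic scales to any number of epics. Here is a minimal sketch of the calculation using the numbers from this example; the function and variable names are mine, not part of any particular tool.

def cost_per_point(program_cost: float, total_velocity: int) -> float:
    # Cost to run the program over a period divided by the points completed in it
    return program_cost / total_velocity

release_cost = 40_000 * 4  # blended cost of $40,000 per sprint over a 4-sprint release
epics = {"epic 1": 40, "epic 2": 60, "epic 3": 100}  # completed points per epic

rate = cost_per_point(release_cost, sum(epics.values()))  # $800 per story point
for name, points in epics.items():
    print(f"{name}: ${points * rate:,.0f}")
# epic 1: $32,000
# epic 2: $48,000
# epic 3: $80,000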

Just a note that my guidance on using story points for estimating and sizing is for development teams. If you are applying agile practices to operational teams, then hourly estimates often work well and may be required. These teams usually have a breakdown of tasks and can assign hours to them based on previous experiences. Operational teams are also less likely to take on too many risky or complex tasks. Teams that have these parameters don’t need to adjust for complexity as much as development teams, and their hours applied are often a crucial measurement for identifying process improvements.

Agile Estimation

If you’re going to promote a data-driven organization (more on this in a later chapter), then you better show up with some data. In this case, the role of the technology organization is to review vision and requirements and to provide estimates. The estimates, in the process that I endorse, are a way for teams and the product owner to have a dialogue on the vision, brainstorm on requirements, and discuss possible solutions. Each solution offered is a scenario, and they often have tradeoffs that might be hard to understand without building and testing. Estimates offer a blunt instrument to compare solutions at least by one measure, the cost and complexity to implement.

In my experience, estimating and the resulting dialogue between the product owner and the tech leads result in more optimal solutions. First, the dialogue itself leads to a better shared understanding of the customer need, priority of the product owner, and other related nonfunctional requirements. This upfront understanding leads to less questioning later or, worse, rework because it is more likely that a shared understanding of the complete requirements gets documented.

Equally as important, and likely to happen through this dialogue, is getting the product owner to prioritize. I’ve never met a product owner who will naturally ask for a minimal viable implementation. Many do the opposite and ask for everything that must be done and should be included, in addition to what is nice to have. And why shouldn’t they, if all they are asked for are requirements without a dialogue on impact and options? Tell them that their requirements are 1,000 points with a team that averages 200 points every 2-week sprint and you’re likely to see them reconsider their requirements. What if I left this out, or minimized this, or simplified that—would it lower your estimate? That’s exactly the dialogue I am seeking with the estimation process, as it weeds out the nice-to-have capabilities without any development investment or stressful dialogue.
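The forecast behind that conversation is simple division. A minimal sketch of what a technical lead might put in front of the product owner, using this example’s numbers:

import math

def sprints_needed(backlog_points: int, velocity_per_sprint: int) -> int:
    # Round up: a partially filled sprint still consumes a full sprint on the calendar
    return math.ceil(backlog_points / velocity_per_sprint)

print(sprints_needed(1_000, 200))  # 5 two-week sprints, or roughly 10 weeks
print(sprints_needed(400, 200))    # 2 sprints (4 weeks) if scope is trimmed to must-haves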

OVERCOMING ISSUES WITH ESTIMATING

Development teams will tell you that estimating is hard. They will want requirements spelled out in detailed stories before estimating them. They will be fearful of providing estimates and then being held accountable when requirements change or when the implementation is more complex than forecasted. Some will inflate estimates to protect against these issues.

Other teams do the exact opposite and oversimplify. Without all the requirements spelled out and with limited time allocated to planning and estimating, they leave out architectural, quality, and other dependencies, which can lead to underestimation.

A third and very practical issue is deciding how much time to invest in estimating. Teams that believe their estimates need to be super accurate tend to invest too much time in the process. In addition, these teams may overengineer their assumptions to complete the estimate.

Finally, there is the question of when and whether to engage the team on the estimation process. Are you going to engage only the leads of the team to enable a more efficient process or the full team to get their input? The latter is very important to get commitment, but engaging the whole team on estimations increases the expense to develop them.

ESTIMATING OBJECTIVES

These considerations are factored into this agile estimation process. Agile teams can’t overinvest their time in the process but still need to provide some accuracy in the estimates they produce. The process needs to provide feedback to the product owner on cost and complexity, and one expected outcome is that not all the stories estimated by the team will be prioritized by the product owner for implementation. This is a good thing and a behavior we are seeking because we want the product owner to lower the priority of stories that provide marginal value but are high in implementation complexity or cost. This balancing of value versus complexity and adjusting priorities or scope is the first objective of estimating.

How many stories will be estimated but not commissioned for development because they are too complex? How many other stories will see radically simplified requirements? The answers vary with the type of project, the nature of the product owner, and the team’s technical strengths executing on the underlying technology platforms. In my experience, the number of stories left on the table or that change radically is significant. Knowing this, the question is how to make estimating extremely efficient when as many as 70% of stories will never get implemented once they are estimated? Does the team need to capture all the stories’ details to estimate?

The answer is no, if you can create a culture and a process of estimating with incomplete information. The reality is that developers never have perfect requirements, complete knowledge of legacy or integration considerations, or full mastery of the underlying technologies. They are always taking on a certain level of risk when they commit to stories even when they have optimal requirements and knowledge.

So the question is, can the development team provide a basis for lightweight estimates that can be used by the product owner to prioritize and descope? My experience is that the answer is yes, provided the team has some history of working together and sufficient mastery of the underlying technologies. These estimates can be used for deciding on scope and priority, developing a first-pass schedule, and getting an early understanding of a release plan. On the other hand, these early estimates can’t be used for commitment or measuring velocity because those require the full requirements and acceptance criteria.

For the rest of this chapter, I will refer to these as “estimates,” and they are often computed on a first-pass breakdown of an epic or feature into “story stubs.” These stubs are often the headline or summary of the story but lack complete detail or acceptance criteria. They are sufficient to help the team break down the business requirement and provide an estimate based on their previous experiences.

Epics or features?

Many organizations with large backlogs or multiple product teams will consider using epics to represent product themes that require multiple releases to complete. They then break down these epics to smaller features that can be implemented or upgraded within a single release. For simplicity, the estimation process described here is based on breaking epics directly into stories that are estimated. If you already break epics down into features, then use this process to break down and estimate one or more stories for each feature.

I said team, but to make this process hyper efficient, it is beneficial if the team delegates the estimating responsibility to the technical lead. Chances are the technical lead is the only one experienced with enough knowledge of the product owner’s intent and the full technology stack to be effective in providing estimates.

What if there are multiple ways to break down this epic into stubs, with each approach leading to a different solution with different tradeoffs? I call these scenarios, and if the technical lead (sometimes with some added help from key team members) can provide these scenarios, it leads to a better dialogue on scope and methodology to implement. So a second benefit of estimating is that it drives technologists to consider multiple solutions and to discuss tradeoffs.

What if the product owner sanctioned a large or complex epic for estimation? That’s okay, just break it down based on what’s known and understood. When an epic can only be partially broken down into stubs because there are too many technical or implementation unknowns, then the technical lead may have difficulty providing an estimate without some upfront investigation and even prototyping. Knowing there are technical risks to implement the business requirement is a third objective of the estimating process. If too much R&D is needed, then the product owner should take this as a warning that she is asking for something that has complexity and risk.

There is also a way to resolve these unknowns by commissioning story “spikes.” A spike is effectively an R&D story to develop or prototype a technical solution. If the team is given the priority to implement a spike, it can be used to address the unknowns in the epic. Estimates can be delayed until the more critical spikes, the ones that address the technical unknowns or risks, are completed.

So is estimating sufficient? What happens with the other 30% of stories that are commissioned by the product owner once stubs are estimated? We’re not done with this process because prioritized stubs need to have the full requirement articulated with acceptance criteria, supporting diagrams, and other standards you decide on for documenting stories. Once the story is fully written, does the early estimate still hold?

The answer most likely is no. As the story is written, requirements are stipulated that may lead to additional implementation factors and complexity. However, the additional requirements should eliminate ambiguities and may lower the estimate. Bottom line is that the estimate is no longer valid once there is a written story with fleshed-out requirements.

Once the story is fully written, reevaluate it and come up with a final “sizing.” The objective now is to engage the team to review all the elements of the story and come up with its size based on their understanding of both complexity and effort. I call this sizing because it’s a distinct step in the process and a different metric from the original estimate.

Comparing the story size against the estimate provides the product owner a second opportunity to evaluate priority and scope. They should question what led to sizes higher than the estimate to see whether the gap is justified. The bottom line is they can make changes before the work is commissioned.

If the work is commissioned, this sizing can then be used by the team to make their commitment and measure velocity. This is because the team is fully involved in sizing, and it is developed off the fully documented story.

So, in summary, this is a two-step process:

1. Estimating on stubs to provide early guidance on cost and complexity, with the goal to descope, lower the priority, or provide multiple solutions with tradeoffs

2. Sizing on fully written stories with acceptance criteria, used as a second validation for priority and scope and then used for commitment and computing velocity
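To make the distinction concrete, here is a minimal sketch of the two artifacts and the gap between their measures; the class and field names are illustrative, not a schema from any agile tool.

from dataclasses import dataclass
from typing import Optional

@dataclass
class StoryStub:
    summary: str                     # headline only; no full detail or acceptance criteria
    estimate: Optional[int] = None   # first-pass points from the technical lead

@dataclass
class WrittenStory:
    summary: str
    acceptance_criteria: list[str]   # pass/fail checks required before sizing
    size: Optional[int] = None       # team-assigned points used for commitment and velocity

def size_gap(stub: StoryStub, story: WrittenStory) -> int:
    # A positive gap means the written requirements added complexity worth reviewing
    return (story.size or 0) - (stub.estimate or 0)

stub = StoryStub("Export report to PDF", estimate=5)
story = WrittenStory("Export report to PDF",
                     ["Matches on-screen layout", "Completes within 30 seconds"],
                     size=8)
print(size_gap(stub, story))  # 3 points added once requirements were fleshed out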

THE ESTIMATING PROCESS

Figure 2-7 shows the process I advocate.

Figure 2-7. Agile estimation on story stubs followed by sizing on fully written stories

• Teams, departments, or organizations should define metrics and a process to derive or demonstrate business value. The product owner is expected to prioritize the high-level epic backlog based on the business value. Many organizations just force-rank these epics and look to develop business value measures only if needed.

• On seeing a new epic prioritized, the technical lead should break it into story stubs. For “simple” epics, this usually isn’t a difficult task for experienced teams, but more complicated ones may:

  – Require sessions with the product owner to get more details.

  – Require breaking down the epic into smaller ones, some of which might be easier to break down and be more important.

  – Require some R&D (spikes) to help flesh out an approach.

  – Fall out of scope for what the team (or teams) can perform—whether by size, skill, or complexity. For larger organizations, the delivery manager needs to consider how best to either reassign the work or get other help for a solution.

• Assuming the epic now has story stubs, the technical lead should assign estimates.

• Delivery leaders will often review story stub estimates. Is the epic fully broken down? Are there architecture considerations? Should technical debt be addressed with the epic?

• The product owner should then review the estimates and has several options:

  – Accept the estimate and move the stubs to the backlog.

  – Lower the priority of the epic or remove it from the backlog.

  – Discuss the estimate to see whether assumptions were wrong or whether the epic’s definition can be simplified to yield a lower estimate. If this path is taken, the newly defined epic should be prioritized again on the epic backlog for a second estimate.
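Viewed mechanically, this review loop is a small state machine. The sketch below models the product owner’s three options; the state names are my own shorthand for the steps above, not terminology from the process.

from enum import Enum, auto

class EpicState(Enum):
    ESTIMATED = auto()      # stubs carry estimates and have passed delivery review
    ACCEPTED = auto()       # stubs moved to the backlog for story writing and sizing
    DEPRIORITIZED = auto()  # lowered in priority or removed from the backlog
    RESUBMITTED = auto()    # simplified definition, queued for a second estimate

# The three options available to the product owner once an epic is estimated
ALLOWED = {EpicState.ESTIMATED: {EpicState.ACCEPTED,
                                 EpicState.DEPRIORITIZED,
                                 EpicState.RESUBMITTED}}

def review(current: EpicState, decision: EpicState) -> EpicState:
    # Reject any transition the process does not allow
    if decision not in ALLOWED.get(current, set()):
        raise ValueError(f"{current.name} cannot move to {decision.name}")
    return decision

print(review(EpicState.ESTIMATED, EpicState.RESUBMITTED).name)  # RESUBMITTED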

THE ONE-WEEK AGILE PLANNING SPRINT

So how do you implement this process in practice? There’s a lot of “it depends” in that answer—the number of teams involved in the program, the complexity of the project at hand, and the technical skills required to complete an epic. In simple terms, larger projects with more team members and more complex assignments are going to require a lot more time and coordination to estimate and size versus smaller projects with few teams and technical skills.

Let me introduce two other constructs. First, the concept of voting, which can be used to sort out conflicting priorities when there are multiple stakeholders with different needs. The second is the architecture review meeting, which is used to ensure that solutions are complete and have included all the technical and quality considerations.
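As a sketch of how the voting construct might be tallied—the stakeholder groups and the 1-to-5 scoring scale here are hypothetical, loosely patterned on the committee described in the Businessweek sidebar later in this chapter:

# Each stakeholder group scores a proposed epic; the aggregate ranks the backlog
votes = {
    "new ad dashboard": {"digital strategy": 5, "editorial": 3, "ad sales": 5, "technology": 4},
    "archive search":   {"digital strategy": 3, "editorial": 5, "ad sales": 2, "technology": 3},
}

ranked = sorted(votes, key=lambda epic: sum(votes[epic].values()), reverse=True)
for epic in ranked:
    print(epic, sum(votes[epic].values()))
# new ad dashboard 17
# archive search 13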

Figure 2-84 diagrams estimating at its simplest level as a single one-week agile planning sprint.

Figure 2-8. One-week agile planning sprint used to stub, estimate, write, and size stories

• Day One, Business stakeholder session—At a meeting of stakeholders (including product owners and senior technologists), three agendas are scheduled:

  1. Epic backlog review—The product owner reviews status on the top features in the agile planning backlog.

  2. Solution review—All epics that were estimated or sized over the last agile planning sprint are reviewed with stakeholders. These are given go/no-go votes and in/out-of-scope decisions on specific feature details. Estimated epics that were given a “go” move into priority for story writing and sizing, while fully sized epics receiving a “go” are ready to be prioritized for commitment.

  3. New epic prioritization—The sponsor of a new epic presents the definition and its business value. At the end of this meeting, new epics are voted on and prioritized in the epic backlog.

• Day Two, Agile planning session—The technology/program management team reviews the epic backlog for new epics and for progress on epics where planning is underway. Team leaders (often tech leads, business analysts, and QA leads—but that depends on your organization) commit to epic estimation, sizing, and story writing.

• Days Three–Four, Team leaders work on planning—For epics in agile planning, team leaders develop multiple scenarios (options) to fulfill the requirements. At least one scenario should represent a “minimal feature set.”

• Day Five, Architectural review session—Team leaders present their story stubs, estimates, and assumptions to architectural and program leads. For each epic, the architecture team makes the following decisions:

  – Ready for solution review—The review is used to decide whether the stubs and estimates developed fully represent the requirements and technical dependencies of the epic. The team decides which of the scenarios should be presented at the stakeholder session and which to drop. This team also considers which epic elements should be reviewed for in/out of scope.

  – Additional estimation required—The team can force an epic into a second round of planning if the story stubs don’t represent an end-to-end implementation, if the estimates are challenged and need to be reviewed in more detail, if dependencies on other teams need to be considered, or if additional scenarios need to be developed.

In this process, I’m stretching the definition of “sprint” and deliberately selecting a fast one-week cadence. With this cadence, an epic’s planning might not achieve “done” (as in estimation complete) at the end of a week, either because its complexity requires more planning or because it does not pass the architecture review. That is by design; complex features that require multiple planning sprints provide feedback to the stakeholder that their ask is complex or challenging. The cadence also forces quick estimates and solutions, since some epics will be deprioritized or have reduced scope once estimates and sizes are considered.

AGILE AT BUSINESSWEEK

The process outlined in Figure 2-8 is very close to what we instituted when I was CIO at Businessweek. The steering committee met Mondays. New features were first vetted by the product management team, then presented to the committee for review. We had different stakeholders representing digital strategy, editorial, ad sales, technology, and marketing that would vote on the feature based on its business merits. I voted for technology. The final votes were aggregated into a score that would help prioritize features going into estimation.
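As a minimal sketch of how this kind of vote aggregation might work, assume each stakeholder group scores a feature from 1 to 5 and all votes carry equal weight; both the scale and the weighting here are illustrative, not the exact scheme we used.

    # Each stakeholder group casts a score for a proposed feature.
    votes = {
        "digital strategy": 4,
        "editorial": 3,
        "ad sales": 5,
        "technology": 2,
        "marketing": 4,
    }

    def priority_score(feature_votes: dict) -> float:
        """Aggregate stakeholder votes into a single prioritization score."""
        return sum(feature_votes.values()) / len(feature_votes)

    # Features are then ranked by score to decide what goes into estimation first.
    print(f"priority score: {priority_score(votes):.1f}")  # 3.6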

At the same meeting, we reviewed any features that had been fully estimated and approved by the architecture group. The solution would be presented by the technology leads along with other scenarios that were considered. The voting committee reviewed the solution and would either sanction the feature for further development, ask for refinements, or suggest removing the feature from consideration.

The product and technology groups met Tuesdays and worked backward. They would start with any features that had already been estimated and approved for development and review the status of story sizing. It was important that these epics be reviewed at top priority to ensure that business analysts and technical leads focused first on getting stories written and sized before going on to review any new epics that required estimation.

Once done, they reviewed epics already in the estimation process along with any new epics voted in by the committee. These would be reviewed in voting rank order, so a new epic could trump planning activities for epics that had some work completed in a previous planning sprint.

Keep in mind that while the agile planning sprint is one week, the commitment isn’t the same as in agile development, where the team is expected to complete stories, pass all acceptance criteria, and have shippable code at the end. In agile planning, the team commits to working on estimating, sizing, or documenting the requirements of the epic, but there isn’t a commitment to have it fully completed by the end of the week. The list of epics also includes ones that were planned but that the architecture review committee did not approve or for which it requested additional technical considerations. The product owner, along with the delivery manager, sets the tone for balancing effort between estimating, sizing, and story-writing activities.

At the end of the week, the architecture review committee reviewed the estimated and sized epics. They had the authority to request more scenarios, determine that a scenario was not fully thought through and was missing implementation details, or challenge the technical lead on the estimates. Ultimately, their job was either to pass an estimate and recommend a scenario or to reject the estimate and request additional work. Accepted estimates were then prepared to be reviewed at the following Monday’s business stakeholder session.

The process worked exceptionally well. We had three independent programs running with this level of review happening across them every week. It led to the redesign of Businessweek’s homepage, the conversion of a legacy content management system (CMS), and the development of new social, mobile, and data products.

ALIGNING AGILE WITH ARCHITECTURE

I’ll close this section on agile with a review of how agile teams that leverage planning practices can still align with longer-term architecture plans or product roadmaps.

Figure 2-9 shows the agile planning cycle we just reviewed for a single epic that is broken down into two scenarios. One scenario is approved by both the architecture and steering groups for story writing and ultimately for development.

Figure 2-9. Instituting scenarios and architecture reviews in the agile planning process

Why is one scenario selected over the other? This comes down to how well the team articulates the tradeoffs between approaches. In the workflow, the architecture review might show that one scenario introduces additional technical debt, relies on a legacy system that will be deprecated, or carries additional performance considerations.

But the product owner can make similar reviews for roadmap considerations. Is one solution better aligned to customer need, the product roadmap, or another strategic consideration? This too can aid in selecting an optimal scenario.

It’s the balance between agile development, which is focused on short-term execution, and agile planning, which provides a framework to plan releases and roadmaps, that makes agile practices transformational. It gives leaders the ability to change course quickly based on customer feedback or to better steer the ship based on strategic direction. It naturally forces a dialogue and alignment between stakeholders, decision makers, and executing teams. It also creates a transparency that enables process improvement and talent review.

Aligning SDLC to Agile—What Is Your MVP (Minimal Viable Practice)?

While agile practices will help you organize a team and align to a process, it’s only when the process includes software development lifecycle (SDLC) practices that it can ensure successful delivery of “working software” at the end of every sprint. I have found three key technical practices that lead to high-quality software and are also prerequisites to aligning delivery and operational teams (DevOps, which I will cover in the next chapter). These technical practices are establishing version control on all application artifacts, aligning a quality assurance practice, and managing technical debt. The next three sections provide details.

Version Control

I learn most about version control when evaluating an organization’s technology practices during due diligence in a merger or acquisition. I’ll start by asking whether the team is using version control, and I always get the same answer, “Yes.” But after further questions, I learn the reality that there are multiple version control tools and repositories, that not all artifacts are checked into the repositories, that software releases are packaged independent of the repositories, and that there is very little automation connecting builds, tests, and deployments.

It’s ridiculous. Developing applications and software needs to be done with basic tools and frameworks. Have you ever seen a construction crew building a multilevel structure without scaffolding and other safety constructs? Developing software without basic version control practices adds significant risk that can make it difficult for teams to collaborate, respond to production issues, or work through complex technology upgrades.

More organizations today develop proprietary software but are not software businesses and don’t know how to implement version control properly. Technology teams in these organizations may be given little time to set up version control and other development practices. In addition, much of today’s development happens across a multitude of technologies deployed to both data center and Cloud environments. It might be easy to check Java code into Git or SVN, but your BI or CMS platform may not make it easy to check its artifacts into the same repository. Cloud and SaaS tools are also likely to have completely independent versioning mechanisms.

Yet I can’t escape the disciplines that made me a strong developer, team leader, and Chief Technology Officer. Code was checked in daily with comments. Releases were tagged in version control. Builds were automated and archived. Branches were created every time an upgrade was scheduled. Code reviews were triggered when developers checked in code. Code was also tagged to the story, so that testers could click and see exactly what code was modified. When automated unit testing became available, these tests were triggered with every build. Deployments were scripted, and automated pushes to staging environments made the latest version available for internal testing and configuration.

This may sound like science fiction to business and development teams that struggle to keep up with demanding priorities. And, yes, some of this isn’t easy to implement without some investment in time and skill, but I would strongly suggest that there be an MVP for these practices to ensure that teams don’t create risky development environments that might materially impact a multiyear transformational program.

Here are some basic practices every development team should be able to implement without too much skill or time commitment:

• Your repository should have all the code and configuration. It needs to have everything that’s changing with the release. That means database scripts to implement and back out changes, workbook files from BI tools, testing scripts, release notes and other documentation, configuration files, build scripts, and so on. Everything one needs to build, test, and deploy the software should be in the repository.

• All developers need their own accounts to the repositories and should be checking their code in daily with comments on what was changed.

• All releases to production should be packaged from assets in your version control repository—no exceptions, even if the build is done manually.

• All releases need to be versioned. Come up with a simple governance on how version numbers are assigned (see the sketch after this list). Learn how to “tag” assets in the version control system with the release number and archive the full package of the release in a separate folder.

• Back up your code repository frequently and have a backup (DR) system. You can’t afford to have the whole development team out of commission if there is a failure.

• Learn how to connect your development tools (code editor, agile backlog) to your repository. At minimum, find efficiencies in using this integration, such as linking commits to stories.
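As a companion to the versioning guideline above, here is a minimal sketch of simple version number governance, assuming a three-part major.minor.patch scheme; the rules and the tag convention in the comments are illustrative, not a standard your organization must adopt.

    def bump_version(version: str, release_type: str) -> str:
        """Compute the next release number under a major.minor.patch scheme."""
        major, minor, patch = (int(part) for part in version.split("."))
        if release_type == "major":  # new capabilities or breaking changes
            return f"{major + 1}.0.0"
        if release_type == "minor":  # enhancements and customer-driven fixes
            return f"{major}.{minor + 1}.0"
        if release_type == "patch":  # defect fixes only
            return f"{major}.{minor}.{patch + 1}"
        raise ValueError(f"unknown release type: {release_type}")

    # The release number becomes the tag applied in version control
    # (e.g., a "v1.5.0" tag on the commit that was packaged and archived).
    assert bump_version("1.4.2", "minor") == "1.5.0"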

Quality Assurance and Testing

If you’re practicing agile development without QA team members, a reasonably defined testing process, and sufficient criteria to help define “done,” then at some point your development process will go off the cliff of complexity. The size of your development team, the number of technologies used in the development stack, and the business criticality of the applications are all quality criticality indicators. They shape how much runway IT has before quality factors drive material business risk that can easily overwhelm the development and operations teams. At some point, the development team is going to make application changes where having defined test plans helps avoid issues that might impact users, customers, data quality, security, or performance.

WHY A QA PRACTICE IS CRITICAL TO LONG-TERM SUCCESS

Let’s look at some basic concepts that point to why QA is so critical to businesses that rely on technology.

As applications are developed with more features and capabilities, more functions need testing. Every agile sprint increases the testing landscape, and if the discipline to define test cases and establish regression tests isn’t developed in parallel, the backlog of test scripts to create becomes too large to execute afterward. If a development team is adding functions and then breaking others, it’s likely because regression testing isn’t in place to catch these issues.

Applications are more complex, involving multitier architectures, multiple databases, transactions spanning multiple APIs, and computing performed on multizone Cloud environments. If you aren’t building unit performance tests along the way, then identifying a bottleneck when it emerges can be a lengthy, painful process.

The application may pass all functionality tests, but the data and calculations can still be wrong. Many applications today are data driven, enabling their users to make better decisions. Functional testing isn’t sufficient because if the data is wrong, the entire value to the end user is compromised. But without a defined test strategy, it is hard to test data, validate calculations, understand boundary conditions, and ensure that aggregations or statistical calculations are valid.
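To illustrate what a data test looks like versus a functional test, here is a minimal sketch, assuming a hypothetical revenue_by_region() aggregation; the function, data, and reconciliation rule are all illustrative.

    def revenue_by_region(rows):
        """Aggregate (region, amount) records into per-region totals."""
        totals = {}
        for region, amount in rows:
            totals[region] = totals.get(region, 0.0) + amount
        return totals

    def test_aggregation_reconciles():
        rows = [("east", 100.0), ("west", 250.5), ("east", 49.5)]
        totals = revenue_by_region(rows)
        # The aggregate must reconcile with the sum of the raw records.
        assert sum(totals.values()) == sum(amount for _, amount in rows)
        # Boundary condition: no input rows should yield no regions.
        assert revenue_by_region([]) == {}

    test_aggregation_reconciles()

The application could render such a report without a single functional defect and still be wrong; only a test that checks the numbers themselves catches it.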

A failed application security test may also be costly to fix. Worse is a security failure in production that could have been avoided with security practices entrenched in the development process. If you’re not testing for security issues, then it’s unlikely the development team is applying basic security design principles to the applications it develops.

Finally, today’s user experiences span phones, tablets, laptops, and IoT devices with different browsers and plugins. Ensuring that user interfaces function and that the user experience is optimized across all these modalities has never been easy; meanwhile, customers have minimal patience for mediocre or clumsy experiences.

ALIGNING THE DEVELOPMENT AND TESTING PRACTICES

Many developers don’t know how to work with testers and don’t have experience enabling testers to do their jobs efficiently. They may finish the development work and throw the completed code over to the testers late in the sprint cycle. When testers are done, they throw any defects back over to the development team to fix. This may volley back and forth until the team hits issues completing the sprint on time.

A testing charter should document any business requirements, the overall risks of the project, and the goals for testing. This document should exist for the application and be adjusted for the specifics of each release. The last step is to outline developer and testing responsibilities, as shown in Figure 2-10.

Figure 2-10. QA test types, risk remediation, and responsibilities between development and QA

This matrix is generic and simplified, so you should take steps to align the details to your business requirements. Clearly differentiate development and QA responsibilities and ensure that the teams acknowledge their mutual dependencies.

Next, review the sprint schedule and see how to best align the full team’s efforts. For example, in a 2-week sprint, I recommend developers complete coding by day seven and leave the final three days for collaborative testing, fixing, and improvements. This changes commitment and team velocity because the team now must consider the effort and time required to develop in fewer days and account for testing. Using the last couple of days of the sprint as a lockdown period also enables the team to look a sprint ahead to evaluate acceptance criteria and size stories appropriately.

Ask teams to have a dialog on which stories they can finish earlier in the sprint cycle. When teams complete a portion of the stories earlier, it allows testing to start earlier and enables the team to have more reliable sprints.

My experience working with teams is that, once there is alignment on objectives, practice definitions, and roles and responsibilities, they are more likely to collaborate. This applies to advanced and novice teams alike, both of which need to adapt their approaches when working together toward a common goal.

WHY DO MANY ORGANIZATIONS UNDERINVEST IN QA?

With so many things that can go wrong, why do many enterprises and the CIOs that lead them underinvest in QA? The workflow may be broken. The data may be wrong. The user interface may look broken on an Android phone. The performance may be degrading over time. There may be a security hole just waiting for someone to exploit it. There is likely loss of revenue if there is an outage.

There are a couple of fundamental reasons. First, in an effort to keep development costs low, executives mistakenly believe that a team can be more productive by adding programmers rather than testers. This is a flawed assumption because most developers make poor testers. Yes, they should be implementing unit tests to ensure they are not releasing defective code. But when you factor in the other testing responsibilities related to security, regression, automation, performance, browser, and device, most developers don’t have the skill, the time, or the interest to execute these disciplines.

When you rely on developers to do the full end-to-end testing, I find that the resulting practice evolves into one of two situations. If your development team isn’t skilled with the practices and tools to automate testing, then testing quality suffers, and it simply doesn’t happen with any regularity or consistency. The result compounds over time, depending on the gaps between what you should be testing and what you can execute. What happens next is that developers spend more of their effort fixing things, and executives get angered over the lack of quality. It can lead to a vicious cycle of blame and mistrust.

I’ve also seen the other extreme, where developers make significant investments to develop unit tests, automate them, and promote a zero-defect culture. Even if you have developers who exhibit this mindset, they may not have mastery of all the underlying technologies used to test security, performance, and devices. They are unlikely to have a deep understanding of how to evaluate risks and translate them into a quality assurance program that aims to mitigate them.

But even in the best of scenarios, where your development team has all the practice knowledge and skills to execute a QA program, you should ask yourself and the team whether this is the best use of their time. Is it better for the developers to spend a sizable portion of their day solving the testing challenges, or would you rather see them interacting with the product owner, developing solutions, and driving business results? Should testing be a core competency of your developers, or should you be surrounding them with these skills to enable them to be more productive in areas where they have unique skills to move the business?

So, problem number one is to make sure that executives and developers recognize the need for testing and that it is a separate, distinctive skill set from solutioning and developing the application.

The second fundamental reason businesses underinvest in QA is that they confuse QA with user acceptance testing (UAT). The belief is that the developers build the application and that they should bring in business users to validate, provide feedback, and ultimately sign off on the changes. Why invest in QA when business users are being asked to test anyway?

Figure 2-11. Difference between QA and UAT by testing disciplines

Figure 2-11 illustrates the differences in some of the responsibilities between QA and UAT. You can see that there is a sharp contrast in responsibility and perspective. UAT should be designed to provide feedback on the product or the application from the user’s perspective. It aims to answer questions like, “Did we build the right thing?” or “Did we get a requirement wrong?” or “Are users using the application differently than anticipated?” or, possibly, “Are we considering the wrong data sets when evaluating the data accuracy, quality, or visual relevance?”

If you rely on UAT without QA, and business users or, worse, customers are doing the testing, then you’re likely to run into a couple of issues. First, business users aren’t skilled at testing. They don’t know how to break things or how to automate tests that can exercise many variable conditions through a workflow. You are also likely to run into business user fatigue. Ask them to test once or twice, and you’ll likely get their participation, but requiring them to test repeatedly while you make fixes and improvements will frustrate them. You can’t expect them to test with rigor through multiple iterations.

This is where QA comes in. QA must assure the product against a known set of tests. Testers should be skilled at evaluating risks and at selecting methods and tools to mitigate them through testing. The testing is often broken down into different types that require different skills and tools. Testers need to work with the development team to design tests for story acceptance criteria, then automate these into regression tests that ideally run with every build and release. They are there to make sure you are not passing something “broken” to the business users for their evaluation.

So your second issue compounds the first one. First, executives need to understand and value the role of quality assurance and testing. Second, they should recognize that it’s a different set of skills versus development and that surrounding them with good testers will make them more productive. Finally, they should understand the significant limitations involved with having end users test.

If you’re getting pushback from executives on the investment needed for testing, my advice is to take a step back and educate them on some of these principles. Part of being a transformational leader is recognizing when leaders or members of the organization don’t fully understand or embrace a key tenet of driving digital; this must be addressed. Once explained, it’s important to follow up with metrics or KPIs that demonstrate the contribution and effectiveness of the program.

Managing Technical Debt

If you haven’t heard this term before, then I suggest doing some research to get all the details. Technical debt is a form of accounting for bad design, bad code, or technical areas of improvement. It’s a technical to-do list created by team members to acknowledge that they may have cut corners to get a story done or that they recognize a better way to implement something that might be required in the future.

As a transformational leader, you want to encourage teams to acknowledge technical debt. First, it implies that the team doesn’t have to design something perfectly on day one and should use best judgment in targeting a minimal and viable implementation. Second, it means they have a process to itemize things that need to be fixed, some sooner than later. Once itemized and ideally prioritized, the list gives you the opportunity to schedule remediation or to analyze technical debt for root causes and solutions.

But here’s the problem. In agile, the priorities of the team are set by the product owner. What if the product owner doesn’t care about technical debt? What if she only cares about working on the next feature or improvement and doesn’t encourage the team to itemize technical debt or prioritize the improvements? How do you get the product owner to prioritize work on technical debt? How do you get business stakeholders interested in performance improvements, security testing, error logging, automated data processing, code commenting, class refactoring, automated testing, platform upgrades, and other technical debt? It’s the most common question I get when discussing agile development with a team of developers who are enthusiastic about maturing their practice.

The simple answer is, without proper culture, incentives, or rules, most product owners will underinvest in fixing technical debt. There are exceptions, but in leading agile transformations, I can tell you that the pressure on product owners is simply too great, and they will almost always drop priorities on technical debt to fulfill stakeholder needs.

The simple way to get technical debt prioritized is for the transformational leader to step in and establish governance requiring a significant percentage of effort applied to this need. I usually set this at 30%, but it can be as high as 40% for complex architectures, legacy applications with significant risk, or mission-critical applications. It can be a lot lower for simple applications.

My rationale is that software companies typically charge 20–30% of license costs to provide support and maintenance. That’s for software organizations selling their software, and the typical organization or enterprise isn’t going to be as efficient, hence the 30–40% benchmark.

PRIORITIZING TECHNICAL DEBT

Here’s the process I endorse:

1. Technical leads and their teams are responsible for itemizing technical debt stories on the backlog at the end of every sprint.

2. Delivery managers or architects are responsible for prioritizing the technical debt and should articulate their rationale for the prioritization.

3. Here’s where there’s room for a lot of collaboration. The product owner can get some of her needs accomplished within the scope of technical debt if she can articulate business needs as part of technical debt stories. The technical lead can also get more technical debt addressed if it is bundled into properly selected, functionality-driven (non–technical debt) stories. It can then be a win–win when the product owner and technical lead partner on what, when, and how to address technical debt.

4. The product owner and technical lead abide by the agreed-on governance principles to ensure the sprint-level technical debt percentage is addressed. The sprint-level technical debt target should be 5–10% lower than the total technical debt target (see the sketch after this list).

5. The remaining technical debt effort should be dedicated to one or two technical debt releases per year to address larger system or platform upgrades.

6. Smarter teams will prioritize more technical debt at the start of a multisprint release and do less of this work near the end of the release cycle, when the product owner feels more pressure to add functionality to the release.
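Here is a minimal sketch of the allocation arithmetic behind these governance rules, assuming the 30% total target discussed earlier and a sprint-level target set 5% lower; all the numbers are illustrative.

    def debt_points_per_sprint(velocity: int, sprint_debt_target: float) -> int:
        """Points of technical debt to schedule into each sprint."""
        return round(velocity * sprint_debt_target)

    total_target = 0.30                  # governance: 30% of effort to technical debt
    sprint_target = total_target - 0.05  # sprint-level target set 5% lower
    velocity = 100                       # the team's average points per sprint

    print(debt_points_per_sprint(velocity, sprint_target))  # 25 points per sprint
    # The remaining 5% accumulates toward the one or two dedicated
    # technical debt releases per year for larger platform upgrades.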

Release Lifecycle

Successful sprints get you from point A to point B, but it’s only when you release new technology into production environments that you start demonstrating business value. Unfortunately, many organizations look at release planning as just the technical communication and collaboration required to move an application into production. In agile delivery, release planning starts a lot earlier, even before the first development sprint. Release planning starts as soon as there is a conversation about the theme for the release, target delivery dates, and an agreement to start work. From that point on, teams need to align on scope, quality, dates, and costs, and communication with stakeholders needs to track with ongoing activities.

In this section, I’m going to focus on classic release planning, which targets releases at the end of one or more sprints. Over the last few years, some organizations have matured their business, development, and operational practices to enable a more continuous delivery cycle. That level of automation and sophistication may be appropriate for organizations that develop a lot of software and gain significant competitive advantages by releasing frequently. For other organizations, especially ones developing applications requiring end user training on workflows or others that are tightly integrated with other applications, a scheduled release plan may be more appropriate.

Stages of the Release Lifecycle

Figure 2-12. Release process illustrating major steps and communication stages

Figure 2-12 shows the release lifecycle with the following stages:

• Release planning takes place before a release is formally scheduled, while the team is debating the type of release, timeline, and target scope.

• Agile development is when the bulk of the work occurs.

• Release delivery is when the release is completed and the team is taking steps to finalize testing, oversee UAT, communicate with customers, and follow change management practices.

• Monitoring and feedback begins after the first deployment to customers, when the team is monitoring behavior through metrics captured from the application or through feedback collected directly from customers on any issues.

You’ll notice that at the end of every transition, formal communications are required to keep stakeholders, customers, and other members of the organization knowledgeable on status. Let’s first review roles and responsibilities during these stages, as shown in Figure 2-13.

image

Figure 2-13. Primary responsibilities in release planning

The release cycle represents a cadence of delivering new capabilities and other enhancements to end customers and users. To enable this, team leaders take on different responsibilities depending on stage. Teams are most familiar with the agile development stage where the bulk of the work is being taken on. My experience is that even mature agile teams struggle with the early and later stages, so let’s do a brief review of the cycle.

Teams should be reviewing priorities every sprint, developing solutions, and estimating. Most of this work is to enable the first sprints of the release coming after the one in development, so if the team is in the agile development stage for release version 1.5, the estimating activities should be to prepare for release version 1.6. By the time the team is ready to start release planning for 1.6, the backlog should have several epics already estimated and ready for the product owner to prioritize.

My assumption is that your development sprints are two or three weeks long and that releases are one to four sprints in duration. Under these constraints, teams should have releases fully estimated (but not necessarily sized) before they start development work, and estimating work during the current release should be either for epics scheduled for subsequent releases or for new epics that the product owner elects to swap into the current release.

Ideally, the release planning stage should be very short. This is a formal step for the product owner to review estimates, consider customer feedback, and finalize the theme and priorities for the release. The technical lead should do the same for technical debt, and the team should work together to align on overall priorities. As priorities are set, the business analyst can begin scheduling stubs, stories, and estimating activities into sprints based on the velocity of the team.

How does one develop and schedule stories into the release? Let’s detail this out:

1. The delivery manager should have standards for how many sprints go into a release. Some products and teams operate on a more continuous cycle and release after every sprint, while others plan for major and minor releases of different sprint lengths. The number of sprints selected for each release depends on the nature of the product and the maturity of the team’s technology practice. I’ve found that teams working on consumer products are asked to release frequently, ideally every sprint, while those working on business products can target monthly, quarterly, or even biannual release cycles. Teams that target more frequent releases require more automation in their testing and release steps, while those with less frequent releases can afford less automation.

2. Product owners and tech leads should stick to the release types standardized by the delivery manager and first must select the type of cycle they are targeting for the release if there are options. One of my teams flipped between major and minor release cycles to enable deploying small fixes and enhancements quickly after a major release was completed.

3. Once the release type is selected, the team knows how many sprints they are working with in their agile development stage.

4. The other variable the team needs to agree on is their velocity—how many points of stories they are going to commit to each sprint. I like using a moving average of the actual velocity of completed stories over the last three sprints.

5. I recommend filling the release schedule with a declining percentage of the peak velocity. So, for example, if the last three-sprint average velocity was 100 points, I would schedule 90 points of stories in the first sprint of the release, 80 in the second, 70 in the third, and so on. This leaves room to add stories during the release period as unknown or new requirements materialize (see the sketch after this list).

6. With that capacity set, the business analyst should schedule stories (or stubs) into sprints based on the priorities set by the product owner and any sequencing set by the technical lead, with a couple of guidelines:

a. Try to get the highest-value or high-risk (and at least medium-value) stories in earlier in the release cycle.

b. Make sure to schedule in technical debt stories during this exercise.

7. Review with the team to finalize.
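Here is a minimal sketch of the declining-capacity fill described in step 5, assuming a 10% step-down per sprint; the step size and numbers are illustrative.

    def sprint_capacities(avg_velocity: int, sprints: int, step: float = 0.10) -> list:
        """Planned story points per sprint, declining from 90% of average velocity."""
        return [round(avg_velocity * (0.9 - step * i)) for i in range(sprints)]

    # A three-sprint release for a team averaging 100 points per sprint:
    print(sprint_capacities(100, 3))  # [90, 80, 70]
    # The unfilled capacity leaves room for stories that materialize
    # during the release.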

Release Cycle Antipatterns and What to Do About Them

The release cycle may sound easy, but it is hard to execute in practice. Problems arise from at least three conditions: (1) poor execution in the current release, (2) inadequate planning for the next release, and (3) the emergence of significant customer feedback that disrupts targeted priorities. Let’s review these scenarios, as they can be impediments to executing a transformational agenda.

For this section, let’s assume the team just finished release 1.0 and is getting ready to start work on release 1.1.

POOR EXECUTION

Poor execution implies that the team released a product with defects and the product owner needs “quick” fixes before going on to what was scheduled for release 1.1. The result is that instead of the team starting work on release 1.1, they are forced to put out one or more patch releases like release 1.0.1, 1.0.2, and so on. The obvious impact is a delay to the schedule of releases plus whatever dissatisfaction the issues create with customers.

Larger development teams can manage the risk in this situation by carving out small support teams to handle patch releases and other support issues. If version control is used properly, these teams can branch the code, work on the fixes, and go through a release cycle with minimal disruption to the development team, which can move on to release 1.1.

Managing this risk in smaller teams is a lot harder. In fact, I’ve seen many product owners try to get the patches in while keeping release 1.1 on schedule. The best these teams can do, if they know they are likely to have to release patches, is to build that work into their schedule and delay the start of release 1.1 to accommodate it.

Neither of these options addresses the root cause, so it’s important to have release retrospectives to help teams discuss, identify issues, and seek out solutions.

Poor execution can come from many factors, such as ill-defined customer expectations, inadequate testing, or pressure to meet tough deadlines that leads to pushing a release with quality issues. When the team and product owner meet in a retrospective to discuss these issues, they should document them along with recommendations. They should then sit down with leadership teams, starting with the delivery manager, to discuss which remediations are endorsed. Process improvements don’t come for free, so if business leaders recognize there are material issues from defective releases, they will often be receptive to fixing them.

INADEQUATE PLANNING

A second issue occurs when the team is so focused on release 1.0 that they don’t spend adequate time prioritizing, stubbing, and estimating for release 1.1. The result is that when they get release 1.0 to production, they are obliged to invest the time to do this planning and effectively delay the start of development on release 1.1. The problem here is that most of the development team is not involved in this planning, which is largely the responsibility of the product owner, technical lead, and business analyst. The rest of the team is left underutilized until the details of release 1.1 are sorted out.

This situation can occur for a few different reasons. The most common is new teams or team members who may not fully understand the planning practices. They are so consumed working on the current release that they don’t allocate time to plan their backlog for future releases. The best remediation for this issue is coaching and monitoring by the program manager, the delivery manager, or a hired agile coach, who needs to pull the leadership of the team aside with regular frequency to make sure they dedicate time to future planning.

Another issue arises if the team underestimates the work for the current release or is having trouble with code quality or other defects coming from the development team. Both have the same impact, especially on the technical lead. Instead of having time to focus on planning, the technical lead gets hands-on either to keep the team on track or to address technical issues created by the team.

Neither of these situations is sustainable. If the technical lead is too hands-on solving challenging technical stories, then planning will suffer. If the development team is performing poorly, then having the technical lead step in will also derail planning. It’s better to recognize these issues and address them directly rather than let planning fall behind. Releases that have too many technical hurdles should be descoped and simplified rather than taxing the technical lead with too much hands-on activity. Development teams that are underperforming must be addressed. Again, it’s the role of the delivery manager to recognize these symptoms and take the right prescriptive actions.

The other issue occurs if the product owner falls behind. Once releases are out to market, the job becomes triply complicated: managing the current release, planning the next one, and collecting feedback on the releases already in customers’ hands. This can be overwhelming to junior product owners, and planning tends to come in third place versus these other needs.

This is a classic time management issue, and product owners need managers or coaches who can recognize it and provide advice. Product owners need education, and sometimes support from more senior leaders, on how to balance their time and cut off involvement in time-consuming issues.

CUSTOMER FEEDBACK PRIORITY AND VISION AMENDMENTS

Getting customer feedback must be viewed as a good thing even if the feedback is negative. At least the team knows the issues or areas of improvement and can reassess their priorities to address them.

But that’s not always the case. The team may be excited to work on what was prioritized for release 1.1, and shifting priorities may create angst within the team. In addition, if multiple stakeholders are involved in the program, stakeholders that have been given priority for work in release 1.1 may voice opposition if their deliverables are delayed and if they don’t agree that addressing customer feedback has higher priority.

So my first recommendation is that business teams create a defined process for handling customer feedback. Organizations that are sales driven or highly sensitive to customer issues are more likely to derail strategic priorities (release 1.1) to address customer feedback. Organizations that are highly customer driven can find themselves chasing feedback and almost never getting to their strategic needs. A vetting process that sorts out “improve immediately” versus “capture and decide later,” along with other considerations, is needed in this situation.

The other option is to plan for it. My team at McGraw-Hill Construction did just that and scheduled “minor” releases after every major release to accommodate customer-driven improvements and other fixes. At first, it appeared less desirable because it elongated the timeline to complete a complex roadmap, but, in the end, it had many practical benefits. Improvements were made in the minor release, which was capped in duration. The allocation forced product owners to prioritize customer feedback and ensure the most important issues were addressed. It also gave teams added time to plan release 1.1 without overtaxing their time in release 1.0 on execution. But this approach does come at a cost, and the delivery manager needs to make sure senior leadership understands this strategy.

Continuous Integration and Continuous Delivery

Once you have sprints and release management practices in place, the next level of maturity is to look at releasing changes more easily and frequently. If you can release easily, then you have the option to deploy small improvements or A/B-test new ideas. Product owners often favor the option to release frequently, with the caveat that it doesn’t introduce risk or slow down productivity.

Two disciplines enable more frequent, inexpensive, low-risk release cycles. The first, called continuous integration, is when the entire software build and test pipeline is fully automated so that new builds can be triggered by events or scheduled. Imagine developers checking in code, testers checking in new testing scripts, and a build with all the application components integrated and tested being generated at the end of the day or more frequently. At minimum, this build flags to the team whether it “passed” or “failed” and enables the team to address issues long before release timelines become a risk.
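As a minimal sketch of the pass/fail signal such a pipeline produces, assume a build driven by make and a pytest test suite; both tools and the two-step pipeline are illustrative, and real pipelines typically run in a CI server rather than a local script.

    import subprocess
    import sys

    def run_step(name: str, cmd: list) -> bool:
        """Run one pipeline step and report whether it passed."""
        result = subprocess.run(cmd, capture_output=True, text=True)
        print(f"{name}: {'passed' if result.returncode == 0 else 'FAILED'}")
        return result.returncode == 0

    # Triggered on a schedule or on check-in; fail fast so the team learns
    # about integration issues long before the release window is at risk.
    ok = run_step("build", ["make", "build"]) and run_step("tests", ["pytest", "-q"])
    sys.exit(0 if ok else 1)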

Continuous delivery takes this process one step further and enables the deployment of new builds to computing environments. Some organizations will implement continuous delivery to staging environments so that internal subject matter experts can review changes or perform user acceptance testing on the latest build.

Other organizations will take this one step further and use continuous delivery to push production builds out with greater frequency. They can do this by moving to a single-sprint release cycle, then looking to shorten the duration of sprints. Teams must have significant confidence in their testing capabilities to perform production releases at this frequency. Frequent production releases work best when application changes have low impact on users, such as SaaS platforms that enable new turn-on-when-needed capabilities. On the other hand, frequently releasing changes to applications with many users performing important tasks is a lot more challenging.

While continuous integration and delivery are highly desirable, they are not optimal in every business situation. They take a lot more skill, maturity, and expertise with new tools within the development team to implement. They also need a strong partnership between development and operations teams to technically enable continuous delivery, and business teams must be able to manage this rate of change.

At minimum, these DevOps practices are good targets for teams so that they develop more automation. Continuous integration should be a goal for mature development teams working on new applications, but it may be more challenging for legacy applications, applications that have regulatory requirements, or teams that are new to agile delivery. On the other hand, the business may be competing with new entrants that deploy new capabilities very frequently. If this becomes a competitive advantage for them, then implementing these DevOps practices should be a higher priority in the transformation program.

Transformational Improvements Through Agile

So, with this framework, you now know how to deliver transformational improvements. Let’s recap:

1. Define the governance regarding team structure, roles, responsibilities, tools, sprint length, release types, and estimation guidelines.

2. Leverage agile planning to enable the development of roadmaps. Use agile practices to execute in a sprint, then use the agile planning practices to get visibility into future sprints.

3. Ensure that you are developing your technical and testing practices in parallel, since agile execution helps only to the extent that there is a strong underlying technical practice.

4. Formalize release plans to align agile teams with customers and business stakeholders.

What you’ve just given your team is a working process to make technological improvements based on strategic priorities and adjusted for customer feedback. It enables the team to prioritize based on value, complexity, and cost, and leverages the team’s expertise to consider multiple solutions to any given challenge. It’s structured in its definition of roles, responsibilities, and timelines but flexible in how it’s applied to different organizational structures, geographical distributions, and other challenges.

But agile is a practice, and by itself it isn’t going to make the IT team fully capable of solving transformational challenges. With a practice defined, we now need to turn toward the underlying guts of the technology itself and how the business operates.