© Springer Nature Switzerland AG 2019
Steven M. StoneDigitally DeafManagement for Professionalshttps://doi.org/10.1007/978-3-030-01833-7_7

7. IT Equals “It’s Tough”

Steven M. Stone1  
(1)
NSU Technologies, LLC, Denver, NC, USA
 
 

Abstract

Digital transformation significantly increases the pressure on IT organizations to deliver. Traditional IT approaches and organizations will struggle to deliver at the pace needed to ensure success in digital transformation. Stone discusses the pressures facing IT organizations and the changes needed to enable the agility and responsiveness demanded by the business. Stone reviews how modern technology tools, techniques, processes, and methods must be deployed to enable the speed and agility of IT organizations. He wraps up this discussion by emphasizing the importance of developing strong analytics and data competencies to form the core of a digital technology strategy.

7.1 The Changing Role of Corporate IT

By this point in the book, I hope you have realized (if you hadn't beforehand) that digital transformation is complicated and challenging. At the same time, I want to continue to stress that it is attainable given the right level of support and focus. For those who succeed, the rewards are greater productivity, speed, agility, continued or elevated profitability, and improved connections (and relationships) with customers.

I have continually stressed that digital transformation requires substantial work from business organizations and IT alike. As we have said on multiple occasions in this book, the path to digital transformation is not accomplished merely with technology. It requires, among other things, cultural change, organizational alignment, structural modifications, role restructuring, process re-invention, and revamping customer interactions. To my brethren in the IT organization, this is the good part of the story. You won't be alone on this journey. All parts of the organization must be committed for digital transformation to be a success.

However, this is where the good news ends for IT. The journey to digital transformation will not follow familiar paths for the IT organization. In fact, I will argue that the transformation that must occur to IT processes and controls is a more significant effort than deploying the technologies supporting the core of the business digital transformation.

Why would that be? What changes do IT organizations need to implement in a digitally transformed organization? To understand this, we need to start by understanding the structure and challenges of existing IT organizations.

Throughout the two and a half decades beginning in 1990, we have seen various evolutions within the IT organization. We have seen engineering and operations migrate from multi-purpose engineers (from the monolithic platform days) to specialists in areas such as networking, storage, middleware, and operating systems. We have seen this same divergence of skills begin to come back together as we preach the power of "converged" infrastructures.

We have seen the emergence of standardization driven by the Information Technology Infrastructure Library (ITIL), detailing the standardized services and associated best practices/processes to be delivered by an IT organization. We have seen elevated levels of control driven by the Sarbanes-Oxley (SOX) Act, the Payment Card Industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), the Federal Information Security Management Act (FISMA), the National Institute of Standards and Technology (NIST), FedRAMP, the General Data Protection Regulation (GDPR), and many others.

We have seen the emergence of client-server computing, which was replaced by web-based architectures, supported by object-oriented designs that later evolved into service-oriented architectures.

Speaking of web-based architectures, the World Wide Web (WWW) was born in 1989. In 1990, very few companies even knew what the acronym meant, let alone the promise of WWW technology. The growth of the Internet drove more and more demand for faster and faster networks. In the early 1990s, the standard network capacity was 10 Mbps. Today, many large organizations have network capacity of 10,000 Mbps (also known as 10 Gigabit Ethernet).

The new networks provide speed and capacity enabling an entirely new form of computing known as the Cloud. It no longer matters where processing takes place. With high-speed networks and the Cloud, users can achieve similar performance whether they are in the same building as the servers or another location across the country.

We have seen database technologies evolve from purely transactional to repositories for unimaginable quantities of data. We have seen basic queries against these databases migrate to advanced analytics and ultimately to cognitive computing and machine learning.

We have watched in amazement as immense computing power shrunk in size to the point that a device in our pocket today has more power than the massive, room-sized computers of only 20 years prior.

We have watched the emergence of an unbelievably powerful wireless network that has propelled a new wave of mobile devices and applications unimaginable only 10–12 years ago.

Finally, we have observed innovative companies such as Google, Amazon, and Facebook rewrite what is possible in terms of high availability in data centers.

Each of these advances led to changes within the traditional IT organization. Specialty groups emerged as we searched for new ways to support business demands for the new technology.

However, for many companies, the changes haven't been enough. Technology life cycles have become much shorter, requiring constant technology refreshes by IT organizations. In addition, company business cycles are also shrinking, putting immense pressure on IT to deliver new capabilities in shorter time windows than ever before.

IT organizations have struggled to keep pace with rising levels of demand from the business and to adapt to new, more complex technology architectures. Consequently, project backlogs have grown, as have backlogs for maintenance and smaller enhancements. The daunting backlogs often leave IT organizations feeling somewhat helpless against a rising tide of demand. Unfortunately, this erodes the business’ confidence in the IT organization’s ability to deliver.

IT is not blameless in this equation. In many cases, IT's insistence on maintaining a high degree of control over all technology requests heightens the backlog volume. In this scenario, an increasing number of demand requests funnel into IT. IT must review the request, determine the extent of the change, and then place it in a queue based on a priority assigned (hopefully) by the business. Predictably, a change that could be performed in a few hours waits in the queue for weeks or months before being addressed.

As a result, the business begins to look for ways to avoid using IT. This avoidance may come in the form of the business building ad-hoc applications using Excel, Access, or some other desktop productivity tool. In other cases, the business may choose to procure a cloud service or even consider leveraging a third-party service provider for their technology needs.

I can recall numerous discussions with business executives from companies over the years. When the subject turned to their IT organization, they would scowl or frown and say something like, "Our IT group just doesn't get it. They take forever to deliver even small changes. But I am sure it is nothing like that at your company, is it?"

While I would love to believe the last part of that statement, I am supremely confident the business executives at any of my former companies said similar things. In fact, in my 34 years, I have never heard a single business executive exclaim: "My IT group is too fast and doesn't spend enough money."

Part of the problem lies in the nature of the relationship between the business and IT. IT is tasked with providing technology solutions for the business. To accomplish this task, IT must balance organization priorities, regulatory requirements, operational risk, security, and scarce resources. The business doesn't always see or appreciate this balancing act.

Taking a step back, it is easy to see both sides of this argument. As a result, in most organizations today, there is a growing set of applications that were not built and are not supported by IT. While this is not optimal by any means, it is manageable given the right set of guidelines and policies. However, when the relationship between the business and IT becomes more strained, it gives rise to users circumventing IT policies and processes. The worst of these users I have come to call “back seat technologists.”

I have encountered my share of back seat technologists at various stages of my career. These are very outspoken critics of IT and are skilled at getting technology solutions completed outside of the IT organization. While their intentions may be good (support the business), their methods are often contradictory to IT controls. As a result, in many cases, the solutions they build or sponsor do not adhere to a supported IT architecture or security standards. To bring these “one-off” solutions under IT control typically requires significant rework.

Many of the back seat technologists believe that technology is as simple as the desktop tools that they work with on a day-to-day basis. They berate IT for making things “too complicated.” When things go wrong on a technology project, they are quick to point the finger at IT.

Unfortunately, often these back seat technologists carry a substantial amount of influence in their respective organizations. As they have a reputation inside their respective functions for “getting things done,” executives often listen when back seat technologists complain of issues in IT. Is it any wonder the tenure of a typical CIO in 2017 is still less than 4 years?

In some companies the business functions begin building pseudo-technology capabilities, often referred to as “Shadow IT.” These groups typically use desktop tools to build out an array of solutions to support business processes. In many cases, the tools are quite amazing. I have seen Excel and Access solutions coming out of these groups that would rival technology companies.

I worked in retail technology for over 23 years. During that time I recall a multitude of times where I encountered these types of solutions. One that still sticks in my mind is watching a demo from one of our business groups on a new set of analytics they were producing to help plan inventory flow across the supply chain. The application was quite impressive for something built in Access. After the demo, they opened the meeting for questions. I possessed a decent understanding of our organization's data. However, I couldn't determine where they were getting certain information about the retail items (stock keeping units, or SKUs). I asked, quite innocently, "Where are you pulling information about the items?"

The reply was not what I wanted to hear. “Oh, the corporate item master doesn’t have everything we need, so we pull it down and add information to it. Also, sometimes the item master is wrong, so we change it to be correct on our system.”

I bit my lip quite hard and asked a follow-up question. “OK, when you update errors on the item master on your system, do you go back and update it on the true corporate item master?” I knew the answer before the question had left my lips.

“No. That would take too much time.”

For those of you reading that aren’t familiar with retail technology, the absolute core of retail is the item, specifically the item master. If the information about the products you buy, move, and sell is incorrect, well, then you have issues. Needless to say, this conversation spurred much discussion between the business function owner and me.

Another problem with applications developed and supported by Shadow IT arises when attrition or reorganization occurs in the business. The Shadow IT applications are often assimilated by IT, in a not-so-graceful manner.

Even though I have had my share of awkward encounters with backseat technologists and Shadow IT, I understand why they exist. They exist because IT has not been able to keep up. Think about it. If IT delivered everything needed by the business in the timeframes needed, there would be no need for Shadow IT.

7.2 Balancing IT Demand and Capacity

At the core of the problem is the fact that there is always more demand from the business than capacity in IT. Demand comes in many forms. Some of it comes from innovations and changes in the business model and the need for technology to support the changes. Some comes from changes in regulatory controls. Some comes from compliance issues that occur when technologies become unsupported or out of date. Some comes from the continual changes in technology and the demand to take advantage of new advances. Of course, some of it is just unplanned, emanating from unforeseen problems, issues, and opportunities.

Deaf Diagnostic

Your IT organization must have mechanisms to collect, categorize, prioritize, and fund demand systemically. Absent these mechanisms it will be challenging to focus resources and align the organization on the effort needed to support digital transformation.
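As a purely illustrative sketch of the collect-categorize-prioritize mechanism described above (every class, field, and category name here is invented for illustration, not taken from any particular tool), demand intake might look like this:

```python
from dataclasses import dataclass

@dataclass
class DemandRequest:
    """A single unit of business demand entering the IT intake funnel."""
    title: str
    category: str          # e.g., "regulatory", "innovation", "maintenance"
    business_value: int    # 1 (low) to 10 (high), assigned by the business
    effort_days: int       # rough sizing by IT

def prioritize(backlog):
    """Order the backlog by value density (business value per day of effort).

    Regulatory work is forced to the front regardless of its score,
    mirroring the non-negotiable demand sources named in the chapter.
    """
    return sorted(
        backlog,
        key=lambda r: (r.category != "regulatory",
                       -r.business_value / max(r.effort_days, 1)),
    )

backlog = [
    DemandRequest("New mobile feature", "innovation", 8, 40),
    DemandRequest("GDPR consent update", "regulatory", 5, 10),
    DemandRequest("Report tweak", "maintenance", 3, 2),
]
for req in prioritize(backlog):
    print(req.title)
```

The point is not the particular scoring formula; it is that demand is captured systemically in one place, sized, and ordered by an agreed rule rather than by whoever complains loudest.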

Regardless of the source of demand, it requires IT resources to be able to address the requirements. I am a big proponent of IT resource management. In other words, I want to understand the aggregate demand for IT services and leverage tools to develop resourcing plans. Let me illustrate this point. At one juncture in my career as a CIO, our business had approved demand (approved by the CEO and Steering Committee) that was three times the size of the staff we had in IT to apply to it. We leveraged a structured sourcing plan to try to address this need, but it was challenging to maintain continuity of staffing and project deadlines slipped.

In this instance, the leaders of the organization felt there were driving business needs for the technology requests. IT had to determine the best way to address them. As I used to tell my staff, "Our job is not to stop progress by saying 'no.' While 'no' may be the right answer, our job is to present the business with options. If they accept the cost and risk of a particular option, then it becomes our job to execute."

It was during this time I realized the old ways just wouldn’t work. Traditional projects, taking 2–3 years to deliver, were virtually out of date before they could be implemented. Change orders on the larger efforts were causing too much disruption and delaying the delivery of business value.

My staff and I looked deeply into how we were operating. We had already reduced the amount of staff working on IT operations (keeping the lights on and systems available) from 65-70% to about 40%. However, we still did not have anywhere near the capacity needed to meet business demands.

The chart below illustrates a common problem in organizations with demand exceeding IT Capacity (Fig. 7.1).
Fig. 7.1

IT demand & capacity

The opportunity gap is aptly named. It represents the unrealized opportunity to innovate or optimize due to lack of resources. IT has taken a number of steps over the years to address this gap including:
  • Operations automation and optimization

  • Outsourcing

  • Staff Augmentation

  • Strategic Sourcing

  • Off-shoring (both external staff and internal)

Even using a combination of all of these efforts, we found we still didn't have enough capacity to meet all the demand. It was at this time the realization began to sink in. IT was doing everything within its sphere of control to meet demands and wasn't succeeding. In fact, we found that every time we satisfied demand, we would create more demand. We found that once we delivered a new solution, demand would immediately increase for enhancements and add-ons to the new solution. As a result, as we delivered a steady flow of new solutions, demand did not subside, it actually increased.

In fact, the only exception to this rule that we observed was artificial manipulation of demand. “We just completed a big IT project and do not want to take on another one.” Or, “we are cutting the enterprise budget and not allowing any new requests.” As a result, absent artificial manipulation, IT is faced with ever-growing demand.

It should not come as a surprise that the business usually is not aware of IT capacity, nor are they concerned about it. I recall a somewhat humorous conversation with a former CEO on IT capacity many years ago. I was requesting to hire an additional 10-15 programmers to improve throughput on some of our more substantial efforts. The CEO listened to my request and then stated, "I don't understand why we need more people in IT. How many people do you have now?"

“About 720. However, I am only asking for programmers, not any other IT positions.” I replied.

“I don’t understand. I thought all of your people were programmers” was his reply.

After a few moments of explaining the various positions within IT, I was successful in getting the new resources. Nonetheless, this perception is still alive and well in many organizations. Employees outside of IT look at the IT headcount numbers and wonder why activities take so long. You can almost hear the question “can’t they all code?”

7.3 It Only Gets Tougher from Here

To compound matters, it is only going to get worse for IT organizations. Business cycles in all industries are shrinking at an alarming rate. The shorter cycle times result in the business needing quicker solutions to their problems. Technology is also becoming more complicated. Consider the growth of digital tools such as mobile, cloud, social, Internet of Things (IoT), big data, advanced analytics, augmented and virtual reality, and blockchain. These technologies are all relatively new but have already driven tremendous investment in organizations across the globe.

Now consider the even tougher part of this equation. Most IT organizations have a portfolio of aging applications and infrastructure. In some cases, businesses have tightened their budgets over the years, and as a result, application and infrastructure upgrades were delayed or canceled. I often use the analogy of "kicking the can down the road" when it comes to upgrades. Often the first question from the business is "Can we wait another year to do this?" Inevitably, the answer seems to be yes. As a result, what should be a small impact at regular intervals becomes a massive upgrade that is disruptive to IT and the business. To top things off, most of these applications in need of upgrade are tightly integrated with other applications, raising the risk of even more disruption.

Let's recap. IT has traditionally dealt with not having enough capacity to meet demand. Now they are faced with demands coming at an increasing pace, more complexity, and a collection of applications supporting their respective companies in dire need of upgrade. Can you think of a bleaker picture?

Neither can I. It is TOUGH (hence the name of the chapter).

So what do we do? The first thing we must do is to look in the mirror and quote the famous philosopher Pogo (the comic strip): "We have met the enemy and he is us."

Now, that doesn’t mean everything challenging IT is within IT’s control. However, there are many things IT organizations can be doing to build the capabilities to meet the challenges of today’s business environment. IT organizations must realize that today’s problems can’t be solved with yesterday’s processes and technology. It is time for IT organizations to change.

Deaf Diagnostic

The IT organization must find ways to enable throughput and satisfy end-user demands. A traditional centralized IT structure will struggle to perform at a level needed to satisfy digital consumers of technology services. Enabling end-user self-service better positions IT for success.

If IT must change, what is the silver bullet to solve all of the problems? Trust me; if I had the one thing IT could do to solve all of these challenges, I would be battling Gates and Bezos for the top of Fortune’s wealthiest list.

However, what I do have is a set of things that will better position IT for its role as a critical enabler of digital transformation. The four steps I propose are as follows:
  • Think about talent differently

  • Adopt DevOps and continuous improvement principles

  • Change your development approach

  • Build a data backbone

Sounds simple, right? As the old expression goes “talk is cheap.” Writing or saying what to do isn’t that tough. Each of these changes challenges traditional IT organizations in a myriad of ways. It isn’t simple, it isn’t easy, and unfortunately, it isn’t optional anymore.

I want to provide a word of warning to the reader at this time. The remainder of the chapter speaks specifically to the things that need to be done inside the IT organization to improve speed, agility, and responsiveness to the business. As such, some of the content may become a bit technical. I will strive to keep the discussions at a high level and explain the concepts (and associated acronyms).

7.4 Changing How We Think About Technology Talent

Let's take the four steps outlined above, one at a time, and discuss what IT must change. The first is thinking about talent differently. Traditionally, IT has held to the belief that we all have to be working together in the same space to make a project work. The problem with that belief is that the talent needed to fuel our projects is not all located in the same place. To offset this, IT would use staff augmentation to add to its central staff. In other cases, IT would bring in armies of contractors, managed by contractors, to try to meet business demand. IT would also offshore resources or outsource to try to gain throughput.

The problem with relying too heavily on external resources is that system knowledge resides only inside the heads of the contractors. As contractors leave, so does the knowledge. In other words, the knowledge needed to move at speed in the future is no longer inside the company.

The first part of thinking about talent differently is to treat IT talent as a portfolio. In other words, we need to be able to look at our base of talent and answer:
  • What skills do we have?

  • What skills do we need now?

  • What skills will we need in 6 months or a year?

  • Where will we find these skills?

  • How do we retain our current talent (skills)?

  • Are we overweighted in any particular skill?

  • How do we reskill existing employees whose current skills may not be relevant in the future?

To build a talent portfolio, IT must take inventory of every person on the payroll and catalog his or her skills. They must take an honest assessment of demand, both current and future. They must look at their attrition rates and attrition reasons to find opportunities to make positive impacts. Most importantly, they must decide who needs to stay on the bus and who needs to get off the bus.

This level of talent management isn’t revolutionary. IT talks about talent every year during annual reviews. However, IT must become much more adept at evaluating their talent and assessing what is needed to drive at the speed requested by the business.

The second part of thinking about talent differently requires IT to come to grips with the fact that they are in a global war for talent. Consider the report from the World Economic Forum showing China graduated 4.7 M students in Science, Technology, Engineering, and Mathematics (STEM) in 2016. India was a distant second with 2.6 M graduates. The US was third at 568 K (or only 12% of the total STEM graduates in China) (McCarthy, 2017). In the US, IT organizations must find ways to tap into these pools of talent without requiring the talent to relocate. In fact, IT needs to embrace global talent pools and use the time zone differences to its advantage as opposed to using those differences as an excuse.

IT must also take a long, hard look at its internal staff and determine who is best suited for the organization of the future. We will cover this more in Chap. 8, but suffice it to say, there will need to be an investment in re-skilling to modernize the talent in most IT organizations.

Deaf Diagnostic

Technology talent is not limited to a single geography. It is essential for the IT organization to implement a global talent strategy enabling the sourcing and retention of staff to support digital technologies.

Finally, we must look at the talent in other parts of the organization. The days of IT battling Shadow IT need to end. Instead of battling, IT needs to embrace the concept of line-of-business or "citizen" developers. Yes, I find it hard to believe I just typed that as well.

As I looked at the constant increases in demand, I realized IT needs more allies to succeed. Clearly, our business wants to succeed. They go around IT because they don’t get the level of service they need. So what happens if IT embraces this? What if IT starts building tools and capabilities that allow the business to self-serve many of their everyday needs?

I recall seeing a backlog in excess of five months in one of my BI organizations. As I looked through the list of requests, it was easy to see the opportunity. The business could have easily resolved more than 60% of the requests had IT provided a set of easy-to-use self-service tools. Instead, our IT organization clung to "control," and the business worked around IT.

The concept of enabling business development is likely foreign to some readers. However, as I mentioned before, it is already happening. Look at the Excel and Access applications in your organization. Look at the growing number of "business-led" SaaS applications or "micro websites." In other words, you don't have control now. Embrace the business, make them a willing partner, and build capabilities allowing them to assemble components easily to form new solutions. Providing these capabilities requires investment in tools to manage data, application program interfaces (APIs), and workflows.
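To make "assembling components" concrete, here is a hedged sketch (all names are invented for illustration): IT publishes a catalog of vetted, governed building blocks, and business users chain them into solutions without touching the underlying logic:

```python
# A catalog of IT-vetted components that business users can chain together.
# Each component is a plain function: data in, data out.
CATALOG = {}

def component(name):
    """Register a vetted building block under a business-friendly name."""
    def register(fn):
        CATALOG[name] = fn
        return fn
    return register

@component("filter_active")
def filter_active(rows):
    # Keep only rows flagged as active.
    return [r for r in rows if r.get("active")]

@component("total_sales")
def total_sales(rows):
    # Sum the sales column across the remaining rows.
    return sum(r["sales"] for r in rows)

def run_pipeline(steps, data):
    """Execute a business-assembled sequence of catalog components."""
    for step in steps:
        data = CATALOG[step](data)
    return data

rows = [{"sales": 100, "active": True}, {"sales": 50, "active": False}]
print(run_pipeline(["filter_active", "total_sales"], rows))  # prints 100
```

Because IT owns the components while the business owns the assembly, the security and architecture standards travel with the building blocks rather than depending on each individual solution.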

It also requires rethinking how IT has typically organized teams. We touch on this concept in more detail later in the book. Suffice to say, organizing around functions vs. outcomes (or products) does not yield the agility IT needs to transform the organization successfully.

If IT can succeed in thinking about delivery differently and begins to build a framework of tools to enable business self-service, it opens a plethora of new resources, while at the same time satisfying demand before it ever surfaces to IT.

7.5 Adopting DevOps

When I first heard the phrase DevOps, I admittedly struggled to grasp the gravity of the concept. DevOps, short for Development and IT Operations, seemed like a phrase describing what IT should have been doing all along. Simply stated, all parts of IT should work together, not against one another. This statement describes the essence of DevOps: bridging the gap between the application development and technology operations groups to enable faster, higher-quality, and more stable software deployments.

While that sounds extraordinarily simple, it isn’t. Traditional IT functions such as Development and Operations often have very different views when it comes to goals and appetite for risk. Development focuses on completing business requests for new features and functions and getting these into production as quickly as possible. Operations, on the other hand, is tasked with ensuring systems are highly available and secure. As such, Operations wants to limit changes that could result in production outages. Hence, conflict arises between Development and Operations as both try to accomplish their stated goals.

At its core, DevOps seeks to increase communication between application development and technology operations, standardize development environments, automate delivery and release processes, and strengthen accountability of all parties. This concept is motherhood and apple pie.

The adoption of DevOps requires deep introspection by the application and operations groups within IT. Some sacred cows (and associated roles) need to be sacrificed on both sides. Goals should be aligned, compromises reached, tools agreed upon, and automation embraced. In the end, a set of self-service tools provides greater speed and agility to developers, while enforcing the rigor needed by operations to sustain or improve reliability.

DevOps adopts many of the principles of Lean, which emerged in the 1990s as a means to improve the productivity and throughput of manufacturing operations. As such, DevOps takes on the same focus as Lean, to dramatically reduce or eliminate waste. In IT, waste comes in many forms. These include:
  • Rework—work done to fix defects

  • No-value Work—working on solutions that do not further the goals of the organization

  • Idle Time—time spent waiting for another function or person

  • Underutilizing talent—time lost by imbalances in the application of talent to solve problems (too much of one skill, not enough of another)

To eliminate these forms of waste, DevOps improves the quality, speed, and operability of technology deployments by:
  • Reducing the size of work deployments (smaller is better)

  • Automating wherever possible to decrease work cycles

  • Reducing or eliminating passing of defects between build phases

  • Developing continuous feedback throughout the build and deploy process

  • Building competencies through repetition

  • Encouraging innovation and risk-taking

Reducing the size of work is a central concept. Smaller work sizes allow for increased frequency of deployments to production. The smaller deployments also reduce the risk of a major outage or disruption.

We discuss the importance of Agile methods later in this chapter. By combining the short delivery cycles of Agile with DevOps, IT can move towards Continuous Integration and Continuous Delivery (CI/CD). CI/CD is the embodiment of smaller work sizes, highly automated workflows, and dedicated product teams, delivering new technology at previously unattainable rates.
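At its heart, a CI/CD pipeline is an automated sequence of gates: every small change runs the same stages, and a failure at any gate stops the change from reaching production. The sketch below is purely illustrative (the stage names, thresholds, and change structure are invented, not drawn from any real pipeline tool):

```python
def build(change):
    # Compile and package the change; assumed to succeed in this sketch.
    return True

def automated_tests(change):
    # Small work sizes let the full test suite run on every change.
    # Here, an oversized batch is treated as a failed gate.
    return change["lines_changed"] < 500

def deploy(change):
    print(f"Deployed: {change['id']}")
    return True

PIPELINE = [build, automated_tests, deploy]

def run_cicd(change):
    """Run every stage in order; stop at the first failed gate."""
    for stage in PIPELINE:
        if not stage(change):
            print(f"Stopped at {stage.__name__}: {change['id']}")
            return False
    return True

run_cicd({"id": "fix-1234", "lines_changed": 40})       # small change: deploys
run_cicd({"id": "big-rewrite", "lines_changed": 8000})  # stopped at tests
```

Notice how the pipeline itself enforces the "smaller is better" rule from the bullet list above: a large batch simply cannot pass the gates, so frequent small deployments become the path of least resistance.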

Automation and the collapsing of processes are also core to DevOps. New advances in virtualization technologies, and new methods to encapsulate infrastructure as software instructions as opposed to physical hardware, afford many opportunities for automation. In fact, the phrase "infrastructure as code" is commonly associated with DevOps automation.
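"Infrastructure as code" means the environment is described declaratively in version-controlled text and realized by automation, rather than assembled by hand. Real tools such as Terraform or Ansible have their own languages; the toy sketch below (all server names and fields invented) shows only the core idea, reconciling actual state against a declared desired state:

```python
# Desired infrastructure, declared as data and kept in version control.
DESIRED = {
    "web-01": {"cpu": 2, "memory_gb": 4},
    "web-02": {"cpu": 2, "memory_gb": 4},
}

def reconcile(desired, actual):
    """Compute the changes needed to reach the desired state.

    Running it twice in a row yields no changes (idempotence), which is
    what makes automated, repeatable deployments safe.
    """
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_delete

actual = {
    "web-01": {"cpu": 2, "memory_gb": 4},
    "old-box": {"cpu": 1, "memory_gb": 1},
}
creates, deletes = reconcile(DESIRED, actual)
print(creates)  # provision web-02
print(deletes)  # retire old-box
```

Because the environment is just text, it can be reviewed, versioned, and rolled back like any other code, which is precisely what lets operations keep pace with frequent small deployments.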

DevOps requires the building of highly automated and comprehensive testing capabilities. These capabilities, coupled with smaller units of work, allow for much more frequent testing. This results in more defects being identified earlier in the development process.
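An automated check is simply a business rule captured as executable code, run on every change. In this hedged sketch (the discount rule and function names are invented for illustration), a defect in the rule fails immediately at build time rather than surfacing in production:

```python
def apply_discount(price, percent):
    """Business rule under test: reject invalid discount percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # These checks run automatically on every small change.
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(100.0, 150)
        assert False, "expected rejection of an invalid discount"
    except ValueError:
        pass

test_apply_discount()
print("all checks passed")
```

Because the suite is cheap to run, it runs constantly; a regression introduced on Tuesday is caught on Tuesday, not discovered by a customer a month later.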

A key benefit of DevOps comes from the feedback loop from operations into development. By identifying operational issues earlier in the process, teams can prevent defects from being passed to subsequent phases, saving a tremendous amount of rework. Consider that every hour of rework saved can be spent building and deploying new functionality and it is easy to see why the feedback loop is so important.

IT organizations often face a "crisis of the moment" that may be the result of a multitude of factors (internal and external). As an example, a significant fiber cut could disrupt telecommunication services for a large geographic area, impacting significant portions of the organization. Lack of IT preparation and situational awareness can result in lengthy response and repair times. DevOps seeks to reduce these times through repetitive practice exercises that build skills and competencies.

DevOps also encourages teams to be innovative in finding ways to improve quality and throughput. In DevOps, work units are smaller, risks are reduced, and experimentation can occur on a frequent basis. This “test and learn” approach is core to developing breakthroughs that lead to dramatic improvements in productivity.

As we will discuss more in Chap. 8, IT organization structures should shift orientation from project to product. With projects, resources are pulled together to complete specific tasks and then disband. I have watched many deployed technology solutions grow stale and unusable over time because no one "owned" the continued health of the solution. Product teams stay together and support all aspects of their assigned product, including new features and functions, enhancements, planned maintenance/fixes, and unplanned fixes.

Many people argue that DevOps principles are not applicable to enterprise application environments containing large numbers of third-party packaged solutions. Admittedly, some parts of DevOps must be adapted to work with packaged solutions. However, I do believe the DevOps principles are incredibly relevant to packaged applications.

As you read the six bullet points earlier describing how DevOps seeks to improve technology deployments, it is impossible to argue these don’t apply to packaged software applications.

However, we have to respect the differences between packaged applications and custom-developed applications. Some of the key differences relevant to DevOps include:
  • Deployment size—Subdividing packaged applications into smaller components is frequently not an option.

  • Frequency of deployments—Packaged application deployments are less frequent than those of custom-built applications and often have deployment schedules dictated by the software vendor.

  • Product Owner—Packaged applications can potentially span multiple business processes, resulting in impacts to more than one product owner.

  • Risk—Due to the larger size and scope of packaged applications, there is an inherently increased risk associated with their deployment.

Understanding these differences is important as it drives the creation of a different pipeline for releasing packaged applications into production. All the principles of DevOps still apply. The packaged application pipeline just factors in the differences described above.

Make no mistake, DevOps isn’t about a single project or tool. It requires a significant cultural shift in the IT organization. As the business and IT begin to erode traditional silos with the adoption of digital technologies, IT must erode its internal silos. DevOps is a critical step in eradicating these internal IT silos.

The DevOps approach is the same as those used by such notable technology stalwarts as Facebook, Google, and Netflix. In short, it has proven successful at scale. In the 2018 Retail Digital Adoption Survey, 80% of the Digital Leaders had adopted DevOps in at least a portion of their organization, compared to only 27% of their peers. The Digital Leaders indicated they were very focused on achieving shortened cycle times through continuous automation. As with the technology stalwarts, the Digital Leaders see DevOps as a way to keep pace with rapidly evolving business models (Stone, 2018).

An excellent book explaining the evolution of DevOps is The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. An IT person reading this book will see themselves in the characters almost immediately. A layperson might find parts of the book a bit confusing. However, it offers a fairly realistic view of the dysfunction that can occur within IT organizations. During critical technology projects, this type of dysfunction is significantly amplified. The central project described in the book, Phoenix, shares many of the characteristics of digital transformation. To understand the importance and relevance of DevOps, I can't think of a better source than The Phoenix Project.

7.6 Continuous Improvement Opportunities

DevOps isn’t the only way IT can improve throughput and increase agility. Applying Lean principles to all processes can help IT identify and remediate inefficiencies that inhibit speed to value.

Understandably, it is critical to match the speed of the business. However, equally important is providing a return on investment. A technology effort in progress, with nothing in production, is not providing any value. In fact, it is tying up organization capital. In simple terms, it is a drain on cash flow.

To illustrate the importance of looking across all IT processes, let’s discuss the workflow of a typical request for new technology functionality from the business. Assuming the new functionality is significant enough, IT reviews the option of building in-house versus buying a service or a software application.

If the choice is to buy, corporate policies often dictate that a request for proposal (RFP) be constructed and issued. Vendors are given a specific period of time to respond. The organization receives and scores the RFPs. Based on vendor responses and scoring, finalists are selected.

Finalists are scheduled to come onsite to conduct demonstrations and answer more detailed questions from the business and IT. This process could require multiple iterations.

Completion of the onsite demonstrations often triggers reference checks with other users of the application. These reference checks require the business and IT to conduct calls or visit other organizations.

After references are checked, a vendor finalist is selected, and contract negotiations begin. Negotiations, in nearly all cases, are an iterative process between sponsors and lawyers on both sides. Parallel to this process, teams are formed and plans created to prepare for the eventual implementation.

Once the contract is signed, implementation activities can begin. Implementation often starts with business meetings to confirm requirements and walk through the package configuration options. Only when requirements are firm does the building or configuring of the system begin.

Before we go any further, how much elapsed time has passed before we can even start configuring the application? The best organizations might accomplish this in 2–3 months. Most organizations would take more than double that.

The other point to consider is how much elapsed time we lose in this process through waiting. At each point of this high-level process, it is highly likely people are waiting on something. Perhaps it is waiting on a vendor reply, a business sponsor's calendar to free up, vendors to schedule travel, a key IT resource to be available for a meeting, a reference customer to be able to meet, procurement to connect with the vendor, or a lawyer to review a contract. In all, I would estimate that at least 60–70% of the elapsed time is spent waiting.

Idle time is an agility killer. To eradicate idle time and increase throughput requires work across the organization, not just IT. Business sponsors, Procurement, and Legal all have roles to play to reduce this cycle time. Recall, our goal is seeking to identify and eliminate waste across all IT processes. The above example is just one of many ripe for streamlining and automation.

Another form of optimization is found in cloud computing. If an application is owned by IT and housed on internal servers and networks, there is a myriad of resources needed just to keep it operational. Changes to items such as operating systems, database management systems, middleware, and network firewalls often require regression-testing (and sometimes fixing) applications. Periodically, hardware has to be replaced, which once again requires a round of testing and certification. When an application migrates to the cloud, much of this work goes away. The cloud provider handles all of these changes under the covers. IT maintains connectivity and integrations (if any) to other applications.

Call centers are another area that can be disrupted by automation. Almost all IT organizations have some form of call center. As AI and voice response technologies improve, machines will be able to handle more and more calls.

The idea is to be as lean as possible in providing day-to-day services in IT while maintaining very high service levels. Why is this important? Let’s consider a very simplistic example. Assume we have a 15-person group responsible for a Human Resources (HR) application. Each year ten of the associates deal with the day-to-day maintenance of the system (including user support, patches, bug fixes, vendor management). One person does an enhancement, and the remaining four do a project.

Through efficiency gains in DevOps and other automation opportunities, it is not difficult to believe we could improve productivity by over 30% each year (this is a very conservative estimate). This improved productivity would free up five resources. Assuming all other things are unchanged, IT would be able to complete an additional project and enhancement with the same number of people.

Think about that from a business perspective. Assume we paid each associate $50 K a year (all in). We would be spending $750 K for the service from IT. Enabled by IT optimization, the business will spend the same amount next year and get double the new value (projects and enhancements) while maintaining service levels. That puts a smile on the face of a businessperson very quickly.
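The arithmetic from the example above is worth making explicit. The sketch below simply works through the numbers already stated, treating the "over 30%" gain as roughly one third for simplicity.

```python
# Working through the numbers from the example: a 15-person team at
# $50K per associate (all in), with the conservative "over 30%"
# productivity gain treated here as roughly one third of the team.

team_size = 15
cost_per_associate = 50_000
annual_cost = team_size * cost_per_associate  # $750,000 spent on the service

freed_resources = team_size // 3  # about a third of the team: 5 people

# Same spend next year, but the freed capacity doubles the new value
# delivered (an additional project plus an additional enhancement).
print(f"annual cost: ${annual_cost:,}")      # annual cost: $750,000
print(f"freed resources: {freed_resources}")  # freed resources: 5
```

Nothing about the spend changes; only the mix of maintenance versus new value shifts, which is why the businessperson smiles.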

Deaf Diagnostic

IT organizations spending more than 50% of available resources on the support of existing technologies (“keeping the lights on”) must rationalize and optimize their processes. A substantial support burden impairs IT’s ability to respond to change and is an indication of unneeded complexity in the existing IT operation.

It is all about providing more VALUE for each dollar invested in technology. However, to provide increased value to the organization, IT must come to grips with a truism. In most cases, improving how IT works is more important than IT just doing work. This truism is reflected in the way IT invests in tools and capabilities to improve productivity.

7.7 Changing the Approach to Development

Speaking of adding value, what if IT could deliver projects and enhancements faster, with greater predictability, greater reliability, and with less cost? Talk about getting the attention of the business!

Of course, this is a lot easier said than done. However, there are new methods, techniques, tools, and disciplines that are in practice today that will, if done correctly, reduce delivery cycle times, reduce costs, and significantly improve the repeatability of success. The seven main concepts we must consider to achieve these objectives are listed below.
  1. Agile or iterative methodology

  2. Social collaboration

  3. API architecture

  4. Lightweight deployment

  5. Built-in security and audit

  6. Cloud leveraged

  7. End-user self-service

Any of these concepts is daunting to an organization that has not adopted them previously. Trying to do all of them together is simply untenable. The order listed is the order I would suggest following. Implementation doesn’t have to be sequential, but it also doesn’t mean to “start all seven tomorrow.” With that, let’s discuss each one in a bit of detail.

7.7.1 Agile Methodology

As we discussed earlier, technology projects struggle or fail at a rate much higher than we would like. A typical step taken by many companies is the adoption of a project methodology. A methodology is essentially a set of processes and techniques used to provide guidance on the appropriate way to execute a project. The two most common types of methodologies used today are Waterfall and Agile. The Waterfall Method emerged in the 1970s as a structured set of phases that cascade (like a waterfall) from one to another. It was, by far, the most commonly used methodology in the 1980s and 1990s.

Agile emerged in the late 1990s and early 2000s. Agile was conceived to combat the common issues plaguing software development. In essence, Agile seeks to provide shorter duration development cycles (often called "sprints"), decreased time to business value, and the ability to accept and adapt to change without significant cost. However, in the PMI 2017 Pulse of the Profession report, we learn only one in five projects used Agile. An additional one in five used some hybrid or blended version of Agile, leaving more than 50% still using traditional methods (Project Management Institute (PMI), 2017).

As we have discussed, IT projects have long suffered from cost and duration overruns. Much of this is due to "scope creep," the term coined to describe the adding of scope to an effort after it has started. Agile works against scope creep by keeping execution cycles short and iterative. As such, in Agile, change is easily incorporated into the natural flow of an effort. By contrast, the sequential nature of Waterfall makes change more difficult and often results in significant rework.

However, armed with this knowledge, many IT organizations today still cling to the traditional waterfall methodology to deploy new systems. Waterfall is not “bad” per se, especially compared to no methodology at all. However, many studies have shown late cycle changes in Waterfall execution are often quite costly (time and money). Also, the sequential nature of Waterfall results in longer project durations and delays the achievement of business value until completion of the project.

I am not advocating the blind adoption of Agile because it is the latest and greatest approach to building applications. I am advocating for Agile because IT needs to shorten delivery cycles, increase involvement and accountability from business partners, and become more adept at dealing with change. Agile provides the framework to do this. You can try to accomplish the same by creating an “iterative waterfall hybrid,” but in the end, you are asking something inherently sequential to be something else. Agile encompasses the principles we need to be successful in today’s volatile development environment.

However, organizations need to do their homework before adopting Agile. There is a myriad of Agile methods such as Scrum, Adaptive Software Development (ASD), Crystal, Extreme Programming (XP), AgileUP (Unified Process), and Dynamic Systems Development Method (DSDM).

All of these versions follow the 12 principles of the Agile Manifesto, but each approaches them in a slightly different way. In fact, many companies choose either to use different versions of Agile for different projects or to meld concepts from the various approaches to create their own unique version.

Regardless of the type of Agile method used, all of them share universal concepts that companies must be willing to adopt. These include more substantial involvement from the business community, iterative/frequent releases, less documentation, creation of minimally viable products, and dedicated technology resources.

While each of these concepts is important, the notion of a minimally viable product (MVP) is a struggle for many organizations. An MVP is a working product with the minimal set of features needed to allow collection of user behavior. Consider a website for taking customer orders. The MVP version of the website has the basic capabilities to enter information to create an order. Later iterations may add automated customer lookup, extended inventory searches, customer follow-up via text or email, and the ability to send an order as a gift. While these later features may be important, they are not required to test the basic functionality of order creation. In our 2018 Retail Digital Adoption Survey, 50% of respondents noted the concept of an MVP was the biggest inhibitor to the adoption of Agile in their organizations. Just behind understanding the MVP concept was lack of leadership support, at 44% (Stone, 2018).

It is often difficult for business leaders to discern the difference between non-negotiable (must have), important, and nice-to-have features. Part of this difficulty stems from past projects and experiences with IT prior to using Agile. In the days of sequential project execution, there was a fixed budget and timeline. As it was often unclear if there would be a second phase to a project, the business became accustomed to building extensive feature lists. In Agile, the features evolve as users interact with the solution.

For organizations willing to take on Agile and embrace the MVP concept, the results can be quite dramatic. The most common benefits cited for Agile include higher product quality, higher customer satisfaction, increased project control and visibility, reduced risk, and quicker time to value.

As noted before, companies only achieve these benefits through understanding and implementing those constructs needed to make Agile successful.

As discussed earlier, DevOps is a natural extension of Agile. DevOps leverages the smaller work products inherent in Agile as part of the delivery pipeline to provide speed and agility in software deployments.

7.7.2 Social Collaboration

Social collaboration speaks to increasing the level of participation in the development process for IT and business alike. Through the building and deployment of self-service portals and the use of collaborative tools such as JIRA, the level of project transparency can be significantly improved. Also, consider that if we achieve the global talent pools discussed earlier, there is a real need to improve visibility and transparency across multiple locations. Digitized tools provide the ability to build a virtual "war room" with capabilities for teams to meet, collaborate, and centrally manage project artifacts. It is critical that all project participants (IT, business, contractors) participate in business-critical discussions and decisions. These discussions can be structured (regular stand-up meetings) or ad hoc. As such, providing tools for a variety of devices, including mobile, is essential.

To some people, I may have placed social collaboration too high on the list. However, I have seen the struggles of aligning schedules, time zones, and teams to facilitate much-needed discussions. Because of this, I can’t overemphasize how important it is to be proactive in creating a platform to facilitate connections between all team members.

7.7.3 API Architecture

Have you ever considered how much simpler our business lives would be if there were a single application that did everything we need? While enterprise resource planning (ERP) systems promised this, none have been able to provide all of the capabilities required of a complex business. As a result, we live in a multi-application, multi-vendor world. This realization places high importance on the ability to build and manage integrations between the various applications required to service business demands.

We have dealt with this issue for decades. IT builds a unique application to integrate two applications (known as a point-to-point integration). As we add more applications, we build more point-to-point integrations until we have an intricate spider web of integrations across a typical organization.

In the early 2000s, technology was introduced to assist with integration. Enterprise Application Integration (EAI) provided a middleware model to sit between applications and provide a standardized way to connect and share information. While this improved the way we connected applications, it still proved to be heavyweight and required tight coordination between the EAI releases and those of the software vendors. Many integration projects using the initial EAI processes struggled due to performance bottlenecks in the central hub of the EAI product and with the connectors to the applications becoming out of sync with the EAI middleware.

The natural evolution of this began to emerge in the past few years. The new methods of integration rely on the concept of an enterprise service bus (ESB). These integration methods provide for the building of lightweight application program interfaces (APIs) that connect applications to the service bus. The best of this new breed of tools leverage open messaging standards to allow for flexibility and "future proofing" (making it easier to add new applications and update existing ones with minimal impact).

The resulting API architecture encompasses essential services such as security, routing, platform (or protocol) navigation, and monitoring/administration. These services are inherited by each integration application to provide consistency and reduce the amount of time needed to “code.” The API architecture provides a lightweight, flexible network of integration points that are centrally managed, scalable, distributable, and connectable to other integration types in the organization.

The API Architecture is implemented in a variety of manners. An example of a related architecture style is microservices. Microservices are much finer grained components of an application that can be modified independently without disruption to the larger application. These microservices also operate independently. They receive, process, and respond to requests from other services. They do this without reliance on an underlying messaging system such as an ESB.
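The routing idea behind bus-mediated integration can be shown in miniature. The sketch below is an in-process toy, not an ESB product; the service and topic names are hypothetical.

```python
# A toy in-process "service bus": applications register lightweight
# handlers (APIs) against topics, and the bus routes each message to
# every subscriber. Service and topic names are illustrative only.

class ServiceBus:
    def __init__(self):
        self.handlers = {}  # topic -> list of subscriber callables

    def subscribe(self, topic, handler):
        self.handlers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # Central routing: the sender needs no knowledge of the
        # receivers, which is what removes the point-to-point
        # "spider web" of integrations.
        return [handler(message) for handler in self.handlers.get(topic, [])]

bus = ServiceBus()
bus.subscribe("order.created", lambda m: f"inventory reserved for {m['id']}")
bus.subscribe("order.created", lambda m: f"invoice raised for {m['id']}")

results = bus.publish("order.created", {"id": "A-100"})
print(results)
# → ['inventory reserved for A-100', 'invoice raised for A-100']
```

Adding a third subscriber requires no change to the publisher, which is the "future proofing" the text describes; microservices achieve a similar decoupling without the central bus.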

In the 2018 Retail Digital Adoption Survey point-to-point integration was identified as the predominant method used by organizations (63%). However, only 19% of respondents cited point-to-point as their only method for integrating applications. This statistic highlights that many retail organizations are in transition to a more flexible integration platform. 75% of respondents indicated they were using a third-party integration platform or had internally developed a services-oriented integration framework. This percentage was 100% for the Digital Leaders (Stone, 2018).

How an organization chooses to implement component-based integration architecture is dependent on their application portfolio, infrastructure, and talent. Regardless, IT organizations must prepare for these integration components being consumed on a broader scale than just IT.

IT should work to build a network of APIs and expose them in a manner that is easy to search and understand. By doing this, IT enables a framework that may ultimately be used by citizen developers to connect various technology applications and assets to build solutions. Providing access to the citizen developers will likely elicit a degree of skepticism in IT. However, consider that the business is already trying to connect disparate applications and data sources using Excel and Access. Wouldn't it be better to give them the tools to do it in the right way? By coupling APIs with smart workflow tools and analytics, the business suddenly possesses the ability to expand applications in a managed manner, without needing additional assistance from IT. Improved throughput with less IT involvement is indeed a "win-win" scenario.

7.7.4 Lightweight Deployment

The adoption of the API architecture allows IT to take the next step in building agility and speed. The push is to get to smaller, less complex deployments that are easier for the business to consume. For IT, these smaller deployments are easier to manage, as they require a smaller technical (CPU, disk, memory) footprint. These smaller deployments represent a trend in the software industry that is gaining traction: containers. A container is a totally self-contained execution environment with its own isolated virtual CPU, memory, storage, and network services. It shares the host server's operating system, resulting in a very nimble execution engine that feels like a virtual machine but does not carry all the start-up processes and weight.

In the simplest terms, a containerized application is isolated from its surroundings. This isolation means it runs the same regardless of its environment.

This explanation may confuse some non-technical readers. Suffice it to say, containers offer tremendous flexibility for developers. A well-constructed container is highly portable, can scale as demand grows, and is much more efficient than traditional virtual computing. Containers also insulate organizations from the impacts of moving from on-premise computing to the cloud, as well as from one cloud provider to another.

In the 2018 Retail Digital Adoption Survey, 50% of respondents were leveraging containers in some form. More telling, 100% of Digital Leaders were leveraging containers to provide additional flexibility and speed (Stone, 2018).

7.7.5 Built-In Security and Audit

If you have ever sat in a war room when a major application was going into production and things weren't going well, you probably heard someone talk about a "trace" or "displays." These are tools enabling developers to look into what is happening inside the code that may not be readily apparent when watching the outcomes of the application. The problem with traces and displays is that you have to add them after the fact. Consequently, you are somewhat shooting in the dark on where to put them to collect the information that would allow you to pinpoint a problem.

Today many applications are being built on frameworks providing rich information on the use of the application. Included in these frameworks are system logs that can be leveraged to ensure applications are secure.

This new breed of application has many significant benefits. It allows for more in-depth and quicker discovery of issues. It also provides built-in auditability that is sure to make auditors and IT alike very happy.

Also, by leveraging the API Architecture, services can be constructed in a manner that imposes security and control through service inheritance. While providing higher levels of security and auditability, this practice also enhances development speed, as security is built-in rather than “bolted on.”

7.7.6 Cloud Leveraged

Probably the most over-hyped word in technology in the past 5 years other than digital has been “cloud.” As with digital, the term cloud can mean many things to many people. When we discuss cloud, we are explicitly talking about the architecture enabling the access to shared computing resources (such as networks, servers, storage, applications, and platforms), which can be easily configured and scaled.

Clouds are delivered in one of three primary forms:
  • Public—A third party provides computing resources to the general public using a standard cloud model, delivered via the Internet.

  • Private—Providing cloud-computing resources to a single (private) organization. The cloud services can be provided internally (behind the company’s firewalls) or delivered as a dedicated service from a third party.

  • Hybrid—A cloud environment containing a mixture of Public and Private services with a level of orchestration to deliver a seamless experience.

Leveraging cloud architecture is another method to gain speed and agility in your development organization. Leveraging cloud services allows development teams to build test and staging environments through a managed, self-service portal, eliminating the traditional bottlenecks in getting environments ready to use. In fact, the cloud is an essential element to support Agile and DevOps. It provides easy access to services (inside and outside the company) while providing freedom to development teams to innovate and experiment.

The cloud architecture is also used as a means to encourage collaboration and sharing of project artifacts between project team members, regardless of location. In short, cloud architecture in any of its forms is a must to enable speed and agility in the technology organization.

7.7.7 End User Self-Service

The final element of changing your development approach is one we discussed earlier with talent and APIs. It requires IT to begin building applications with end-user self-service in mind.

I won’t rehash my earlier comments, but it is essential IT embrace the notion solution delivery is distributed not centralized. Distributed solution delivery requires IT to find ways to build frameworks and tools to enable business developers, without requiring constant intervention from IT.

At one of my previous employers, there was a well-cultivated citizen developer organization in place before I arrived. Once I came on board, I talked to many of these groups across the organization. It became apparent to me this group of citizen developers easily numbered 200–250 people, not counting part-timers. These 200–250 associates were essentially working full-time on the development and maintenance of (primarily) Excel and Access solutions.

As I looked at what these developers were doing, it was amazingly inefficient. It wasn’t their fault as they were using the only tools they understood. I was confident we could eliminate almost 60% of the work being done just by providing more structured data access and a better class of end-user tool. Assuming an average salary of $80,000 (which is conservative) and 50% savings, this translates into over $10 M of annual labor dollars.
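The labor math above is straightforward to verify. The sketch below uses the upper bound of 250 developers and the 50% savings rate stated in the text.

```python
# The labor math from the paragraph above: up to 250 citizen developers
# at a conservative $80,000 average salary, with roughly half the work
# eliminated by structured data access and better end-user tools.

developers = 250
average_salary = 80_000
total_labor = developers * average_salary    # $20,000,000 per year

savings_rate = 0.50
annual_savings = total_labor * savings_rate  # $10,000,000

print(f"${annual_savings:,.0f}")  # → $10,000,000
```

Note the sensitivity: even at the lower bound of 200 developers, the same assumptions still yield $8M in annual labor value.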

Let’s look at these savings in another way. Instead of eliminating the labor, what if the labor was repositioned to add new value? Using labor savings to create new value has a massive compounding effect and can deliver many multiples over the base labor savings.

As I stated before, I don’t see a way for IT to win in a completely centralized model. Enabling the business to respond to rapid changes in a managed, self-service manner is the only way IT can ever win the business demand war.

Deaf Diagnostic

In a digital enterprise, speed and agility are prized characteristics. To achieve these characteristics, IT must adopt modern techniques for application development. Traditional waterfall approaches will not be successful in a digital enterprise.

7.8 Building the Data Backbone

The last of the four steps to transform IT is the building of a data backbone. We should remember that at the core of all information technology is the first word, "information." All systems are designed to process information (data) in some form or fashion. This principle isn't going away. In fact, it is more critical than ever. I genuinely believe, in today's digital world, a company's future is predicated on its ability to collect as much relevant data as possible and to use it to make more rapid, well-informed decisions.

Have you heard this all before? Do you remember phrases like the “data warehouse,” “single version of the truth,” or “data mining?” What about Artificial intelligence (past iterations)? How about Big Data and Hadoop?

Yes, we have talked about the importance of data in business decision-making processes for a long, long time. However, this time it is different. “Why?” you may ask. You need to look no further than your pocket (or wherever you store your smartphone). We live in a hyper-connected world where smart devices, sensors, and applications are producing data at an unprecedented rate.

As we enter a new age of computing, it only makes sense to “double down” on the reason IT exists in the first place, processing data. However, honestly, I still haven’t answered the question of what is different now. To do this, let’s do a short review of the history of decision support sciences.

In the past, we had systems producing transactional data that did not provide an easy way to access the data to create insights. In the late 1980s and early 1990s, the first systems began to emerge across industries aimed at providing richer information to enhance decisions. The “data warehouse” grew out of this movement. Unlike the transactional systems, the data warehouse was architected to organize data in a manner enabling ease and speed of queries. Data warehousing was widespread in the 1990s and led to the emergence of many technology companies devoted to building, maintaining, and leveraging data warehouses. Included in this was the emergence of the modern business intelligence platforms. These platforms were purpose-built to provide a simpler way to develop and present reports, dashboards, and analysis.

These technologies were, for the most part, built on the paradigm of relational database technology, which organizes data into structures known as tables. Each relational table contains rows and columns. This structured data view works well in organizing and managing information from traditional transactional systems. New powerful database technologies such as Teradata emerged to provide the capability to query massive amounts of relational data in a fraction of the time previously possible.
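To make the relational paradigm concrete, here is a toy sketch using Python's built-in SQLite module. The table, store names, and amounts are invented for illustration; a real data warehouse would run queries like this against purpose-built engines such as Teradata.

```python
import sqlite3

# An in-memory relational database: data lives in tables of rows and columns.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (store TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [
        ("Denver", "widget", 19.99),
        ("Denver", "gadget", 5.50),
        ("Boston", "widget", 19.99),
    ],
)

def sales_by_store(conn):
    # A warehouse-style query: aggregate amounts by store.
    rows = conn.execute(
        "SELECT store, SUM(amount) FROM sales GROUP BY store ORDER BY store"
    ).fetchall()
    return dict(rows)

totals = sales_by_store(conn)
print(totals)
```

The row/column structure is what makes such aggregations fast and easy to express; it is also exactly what breaks down for video, audio, and free text, as discussed next.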

As we moved into the 2000s, the composition of our data began to change. Semi-structured data such as XML became increasingly important to businesses. This data, while adhering to a form of structure, did not follow the traditional relational row/column orientation. As data storage costs continued to drop, more companies sought to mine information from unstructured forms of data such as video, sound (voice), pictures, and unstructured text. These data types do not conform to relational constructs and typically require significant amounts of storage.

Coupled with the growth of these new data structures was an explosion in data created from social applications, mobile devices, sensors, and beacons. The chart below shows the unprecedented growth in data in the past decade (Fig. 7.2).
Fig. 7.2 Projected data growth

The massive growth in data and the desire to gain insight from mixed data types led to the creation of the MapReduce programming model. MapReduce provides the ability to do distributed processing of large, mixed data sets while providing redundancy and fault tolerance.

The most popular implementation of MapReduce is Hadoop, which was developed to improve web searches for Yahoo. Yahoo released Hadoop to the open source community in 2008, and it has grown wildly in popularity ever since. Today, Hadoop is the de facto standard for processing extremely large, complex volumes of data.
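The MapReduce idea itself can be illustrated without Hadoop. The toy sketch below counts words across documents in plain Python; in Hadoop, the map and reduce functions would run in parallel across a cluster, with the framework handling the shuffle, redundancy, and fault tolerance.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values into a single count.
    return {word: sum(counts) for word, counts in groups.items()}

documents = ["big data big insights", "data drives decisions"]
pairs = chain.from_iterable(map_phase(d) for d in documents)
counts = reduce_phase(shuffle(pairs))
print(counts["data"])  # 2
```

Because each document is mapped independently and each key is reduced independently, the same logic scales from two sentences to petabytes.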

Another outgrowth of having all of this data available to be mined and analyzed is Artificial Intelligence (AI). There are multiple varieties of AI today, including machine learning, deep learning, and cognitive computing. All are slightly different but follow a similar principle: the more data provided, the more the algorithms "learn," and thus the better the quality of decisions.

I can’t overstate the importance of this trend to digital. In digital transformation, we are changing the speed at which we need to identify patterns, process data, and make decisions. AI algorithms are the foundation of this capability.

Think about what happens today when you are on a website or mobile application looking at products. Suddenly a plethora of related items appear for your consideration. Likely, these recommendations are the outcome of “learning” algorithms. The algorithms have been trained based on patterns from other shoppers, your personal preferences, and your previous purchases. These recommendations do not require human intervention and happen at the right moment to influence customer purchase behavior.
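One simple form of such learning can be sketched as item co-occurrence: products frequently purchased together are recommended together. This is a toy illustration, not how any particular retailer's engine works, and the product names are invented; production recommenders blend many richer signals (personal preferences, browsing history, and more).

```python
from collections import Counter
from itertools import permutations

def train(baskets):
    # "Learn" from shopper behavior: count how often each pair of
    # items appears together in the same basket.
    co_counts = {}
    for basket in baskets:
        for a, b in permutations(set(basket), 2):
            co_counts.setdefault(a, Counter())[b] += 1
    return co_counts

def recommend(co_counts, item, k=2):
    # Recommend the k items most often seen alongside the given item.
    return [other for other, _ in co_counts.get(item, Counter()).most_common(k)]

baskets = [
    ["tent", "sleeping bag", "lantern"],
    ["tent", "sleeping bag"],
    ["tent", "lantern"],
    ["stove", "lantern"],
]
co_counts = train(baskets)
print(recommend(co_counts, "tent"))
```

The more baskets fed in, the better the co-occurrence counts approximate real shopper behavior, which is the "more data, better decisions" principle in miniature.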

This same process begins to reshape business processes across the organization. AI algorithms will become the norm for decisions that are fact-based and repeatable.

The data backbone of the organization feeds these AI algorithms. Understandably, the data must be of high quality. We don't want to train AI systems using data fraught with errors. The data must be organized in a way that machines and humans alike can access it. The speed of data refresh must be near real-time, as decisions are made in the same (near real-time) manner.
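A minimal illustration of gating data on quality before it reaches AI training might look like the sketch below. The field names and records are invented; real pipelines apply far more checks (type validation, range checks, freshness thresholds).

```python
def quality_report(records, required_fields=("id", "amount", "timestamp")):
    # Reject records with missing required fields so errors are not
    # fed into downstream AI training.
    bad = [r for r in records if any(r.get(f) is None for f in required_fields)]
    return {
        "total": len(records),
        "rejected": len(bad),
        "clean": len(records) - len(bad),
    }

records = [
    {"id": 1, "amount": 9.99, "timestamp": "2018-06-01T12:00:00Z"},
    {"id": 2, "amount": None, "timestamp": "2018-06-01T12:05:00Z"},
]
report = quality_report(records)
print(report)  # {'total': 2, 'rejected': 1, 'clean': 1}
```

Surfacing a rejection count like this, rather than silently dropping bad rows, keeps data quality visible to the humans who own the backbone.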

The 2018 Retail Digital Adoption survey underscores the importance of data in driving performance. 100% of Digital Leaders were pursuing big data and advanced analytics capabilities, with 80% having completed at least a portion of their initiative. Only 18% of the other respondents had completed a portion of their big data and advanced analytics initiative (Stone, 2018).

Building a data backbone with these characteristics requires thoughtful planning by IT. The architecture of the data backbone must be able to accommodate scale, speed, and data lineage. Achieving this architecture likely requires multiple technologies, as different data use cases may be better served by specialty applications. However, it is critical that the organization develop and maintain a firm grasp of its data pipeline (where data originates, where it flows, where it is enriched, and where it is archived). Absent this knowledge, response times to business change are elongated.
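One lightweight way to keep that grasp is to record lineage metadata alongside every pipeline step. The sketch below is illustrative only; the dataset, source, and step names are invented, and dedicated lineage tools offer much more.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageRecord:
    # Minimal lineage metadata: where the data came from,
    # what touched it, and when.
    dataset: str
    source: str
    transformations: list = field(default_factory=list)

    def add_step(self, step: str):
        stamp = datetime.now(timezone.utc).isoformat()
        self.transformations.append((stamp, step))

    def trace(self):
        # A human-readable pipeline history for this dataset.
        steps = " -> ".join(step for _, step in self.transformations)
        return f"{self.dataset} (from {self.source}): {steps}"

record = LineageRecord("daily_sales", source="pos_system")
record.add_step("deduplicate")
record.add_step("enrich with store master data")
record.add_step("archive to warehouse")
print(record.trace())
```

When a business change arrives, a trace like this answers "where does this data come from and what happens to it?" in seconds rather than weeks.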

Deaf Diagnostic

Digital businesses run on data. As such, to enable successful transformation, organizations must have a well-defined strategy and associated architecture for building enterprise-class analytics capabilities.

To further illustrate the importance of analytics, Gartner noted in the 2018 CIO Agenda that 64% of CIOs at top-performing organizations are very or extremely involved in their enterprise’s BI/analytics activities, with participation by CIOs at typical or trailing organizations much lower (Gartner Group, 2017).

The challenge for IT is clear. If IT is to be a catalyst for change in the business, it must change its ability to respond. The four steps we covered in this chapter are not simple and do not happen overnight. As such, it is critical for IT organizations to be working on these four steps already or be deep into the planning process for action. In the new world of digital technologies, IT must become more of an orchestrator of services and less of a controlling central figure. This new role isn’t something many IT organizations have considered, and it will take some time to make the transition.

Make no mistake about it; we are asking IT to change the tires on its processes while the car is moving. Working changes into existing and upcoming projects will get IT part of the way there. However, the fundamental changes in IT to achieve speed and agility will require focused efforts. It is not just an investment; it is a required investment.

Of course, the investment in IT transformation means very little if you don’t have the talent needed to make it successful. In Chap. 8 we will discuss the talent changes and enhancements needed across the organization to help drive digital transformation.