
Side Glance: The Cloud and the Future

How long, then, should we hold to these rules and not break up the game? As long as the game is going nicely.

Epictetus, Discourses

A sublime thought, if happily timed, illumines an entire subject with the vividness of a lightning-flash.

Longinus, On the Sublime

It’s very difficult to keep a computer busy all day. A billion times a second it’s looking around for a new task. No matter how many cat videos you watch, the computer still spends most of its time waiting for you to tell it which cat video you want to see next. This restlessness has been the driver of much of the recent innovation in technology.

I would be undermining my point that uncertainty dominates the digital arena if I were to write a chapter telling you what I think is coming next for enterprise IT. Instead, I’d like to point out some of the important trends of the moment that should influence the way we think about the future.

I’ll start with the cloud, the impact of which is already deeply felt. A logical extension of the cloud is serverless computing. Artificial intelligence, and in particular machine learning, has become practical for general business use and changes our fundamental ideas about computing. All three of these developments have deep implications for the way enterprises use IT and what IT is capable of contributing to the enterprise.

There is an enormous amount of restless computer processing capacity in the world today, and cloud providers are adding more of it. The trends I’ll discuss are ways of keeping our processors from getting bored with us.

The impact of the cloud is deeper than you may think. It’s leading toward the demise of the hardware that has run our enterprise systems, at least as a concern for IT departments. This change is happening at the same time that consumer devices are replacing the desktop and laptop computers we use every day, the devices we actually put our hands on. Computing power, in other words, is in our pockets and in the cloud, and less in the offices and datacenters of our companies.

The cloud’s magic is that it turns computing power into a service. Like electricity in your home—flip a switch and it’s on, flip it again and it’s off. You pay the electric company for what you use. You have no need to burn coal or set up a hydroelectric dam to get your electricity—presumably someone else is doing that, although in any case you have no reason to concern yourself with it.

In the past, enterprises acquired computing power by doing the equivalent of generating their own electricity. They bought computing hardware, installed it in racks in a datacenter, and set it up following the manufacturer’s instructions. They connected it to electricity and to the internet, and provided enough air conditioning to keep the hyperactive, restless computers cool. They set up physical security to prevent bad guys from sneaking into the datacenter. And they provided the labor to manage the computer, upgrade its software, and diagnose its problems.

Organizations using the cloud don’t have to do any of these things. When they need computing power, they use their cloud provider’s website or run an automated script to request it and it’s immediately available. When they’re done computing, they release the computing power and stop paying for it. If they need data storage, that is also immediately available—and just as immediately disposable when it’s no longer needed. In fact, as of the end of 2018 Amazon Web Services (AWS), the pioneer in cloud computing, offers a menu of more than 165 different services an enterprise can consume at any time as a utility—from computing power and storage to networking, security, databases, analytics, machine learning, and even automated call center services. Flip a switch and they’re on, flip it again and they’re off.

One consequence of the cloud’s rise is that information technology, once a matter of both hardware and software, is now increasingly about software only. The IT organization can set up infrastructure in the cloud by writing automated scripts—in other words, by programming it—rather than by fiddling around with hardware. In this book I’ve made the case that agility is critical for digital enterprises; imagine now how much nimbler the transition from hardware to software can make you. Software is something you can change in seconds by entering keystrokes at a keyboard, while hardware can only be changed by replacing it with new hardware.
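
To make that concrete, here is a minimal sketch of what “programming your infrastructure” can look like, using the AWS SDK for Python (boto3). The machine image, key pair, and instance size below are illustrative placeholders rather than recommendations.

# A minimal sketch of requesting a virtual server with a script instead of
# racking hardware. The AMI ID, key pair, and instance type are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder machine image
    InstanceType="t3.micro",           # a small, inexpensive instance size
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder SSH key pair
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Requested instance {instance_id}")

# When the workload is finished, release the capacity and stop paying for it.
ec2.terminate_instances(InstanceIds=[instance_id])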

Another consequence is that organizations can rapidly adjust the amount of computing power they consume. If your website suddenly gets a surge of user traffic, then you can increase the amount of infrastructure you use until the surge dies down. If your employees only work during daylight hours, then you can reduce the amount of infrastructure you’re paying for at night. If you have a seasonal business, then you can reduce your infrastructure spending out of season. Intuit, for example, can save five-sixths of its TurboTax AnswerXchange infrastructure costs by turning off infrastructure when tax season ends.1
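
One way this plays out in practice is scheduled scaling. The sketch below, which assumes a fleet of servers managed by the AWS Auto Scaling service and uses boto3, shrinks the fleet every evening and grows it again every morning; the group name, sizes, and schedules are placeholders.

# A sketch of scheduled scaling: shrink a fleet of servers overnight and grow
# it again in the morning. Group name, sizes, and cron schedules are placeholders.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale down to one instance at 8 p.m. UTC every day.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-fleet",
    ScheduledActionName="scale-down-overnight",
    Recurrence="0 20 * * *",
    MinSize=1,
    MaxSize=2,
    DesiredCapacity=1,
)

# Scale back up to ten instances at 6 a.m. UTC every day.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-fleet",
    ScheduledActionName="scale-up-morning",
    Recurrence="0 6 * * *",
    MinSize=10,
    MaxSize=20,
    DesiredCapacity=10,
)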

Again, the cloud is a lot like that electrical utility company. Cloud providers such as AWS achieve tremendous economies of scale and pass the savings on to their customers. AWS and other providers have put a lot of restless computing capacity in the cloud, and enterprises can use it to make themselves nimbler.

To explain how and why the cloud works, I’ll dive briefly into its history. It starts with the fact that most of the computers permanently connected to the internet are servers, which means they wait for another computer to ask them to do something, do it, and then resume waiting. For example, when your browser displays a web page, your laptop communicates over the internet with a server, which transmits the bits that make up the web page. Other servers handle the company’s HR systems, databases, email, and fantasy football pools.
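
To make the idea concrete, here is a minimal sketch of a server written with nothing but Python’s standard library: the program sits idle, answers when a browser asks it for a page, and goes right back to waiting.

# A minimal web server: it waits for a request, returns a small page, and
# resumes waiting. Real servers do the same thing at much larger scale.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Respond to a browser's request for a page.
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>Hello from a very patient server</h1>")

# The program spends almost all of its time here, waiting for the next request.
HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()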

Even while one of these servers is working, it’s often just one part of the computer that’s busy, while the rest are twiddling their thumb drives. In the 1990s, technologists came up with a solution for all of this wasted computer capacity. They created a piece of software called a hypervisor that lets one server act as if it were many, each called a virtual machine (VM) or an instance. Since each VM stayed somewhat busy handling the tasks it was assigned, together the VMs kept a single physical server reasonably busy most of the time.

This was all possible because the server hardware, it turned out, was pretty much ignored once it was installed in a datacenter rack. Even the systems administrators responsible for the servers rarely saw the hardware—they sat in a different room, perhaps even in a different geographical location, and communicated with the server over a network, just like everyone else. So as the single physical server morphed into many VMs, no user was any the wiser.

The actual work you want the computer to perform—the software you want it to run—is called the workload. If you had a number of VMs running on a number of servers, and a number of workloads to task them all with, all you had to do was distribute the workloads among the VMs and let them work.

This was the way that Amazon.com had set up its infrastructure, and in 2006 the company had the idea of offering extra VMs to other companies to use for their workloads. It launched something it called the Elastic Compute Cloud, or EC2. Any company could use Amazon’s automated tools to set up a VM on one of Amazon’s servers, load a workload into it, and set it running. No one could tell that the server was in an Amazon facility, since everyone using it was located somewhere else anyway. Your web pages appeared to your customers just as if your web server were in your own headquarters, in a datacenter, or under Gerald’s desk.

The cloud developed from there. In addition to VMs, cloud providers also began to offer data storage for a fee. Each started replicating its infrastructure in multiple locations so that workloads would be resilient in the face of disasters. And the cloud providers began offering software capabilities in addition to infrastructure, with fees again based on usage.

Cloud infrastructure is now globally distributed. AWS alone has infrastructure in twenty locations as of the end of 2018, each consisting of at least three separate datacenters to provide resilience against disasters. Businesses benefit from the effort AWS puts into ensuring that its infrastructure is reliable and secure. Among its users today are federal agencies, such as the Department of Homeland Security and intelligence agencies; banks and financial services firms such as JPMorgan Chase and Capital One; and highly trafficked consumer companies such as Netflix and Expedia.2

The next phase in cloud evolution is called serverless computing. Since providers have abstracted away physical infrastructure into the virtual world, the next step is to altogether eliminate the concept of infrastructure. With serverless computing, which is already available but will likely become more common in the future, IT folks no longer need to explicitly provision infrastructure in the cloud. Instead they can simply provide some code and specify under what conditions it should run—that is, when compute power should be applied to it. The cloud takes care of the rest. Serverless computing brings us one step closer to computing as a utility.
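
As a rough sketch of what “simply provide some code” means, here is the shape of a Python function written for AWS Lambda, one serverless offering. It assumes a trigger that fires whenever a file lands in a storage bucket; the event fields below reflect that assumption.

# A sketch of serverless computing: you supply only this function and tell the
# cloud when to run it (here, assumed to be whenever a file arrives in an S3
# bucket). Servers still exist, but they are no longer your concern.

def lambda_handler(event, context):
    # Pull the bucket and file name out of the triggering event.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    print(f"New file {key} arrived in bucket {bucket}; processing it here.")

    # The platform handles provisioning, scaling, and billing per invocation.
    return {"statusCode": 200, "body": f"processed {key}"}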

Many companies are already using serverless computing, but the wave of innovation it will bring, I think, is just starting. As it develops, the IT budget will continue to move toward compute as an operational resource that is consumed, rather than an investment in hardware as a fixed asset.

The cloud is one of the drivers of the digital revolution. It has reduced barriers to entry across industries, since new entrants no longer have to invest in hardware and datacenters before becoming operational. The infrastructure they need is immediately available and their costs—based on volume of use—remain low while their businesses have a chance to grow.

The cloud helps provide the agility that enterprises need if they’re to succeed in their digital transformation. In addition to cost advantages and the ability to quickly scale up and down based on demand, it promotes agility by letting them try out innovative ideas, all the while decreasing their lead times and making it easy for them to scale globally.

Innovation. When used together with DevOps, the cloud helps an enterprise innovate and find new opportunities for growth, as it can try out ideas with little risk and minimal cost. If an innovative idea doesn’t work out, the enterprise can simply stop paying for the infrastructure and services it used to launch it.

Velocity. The cloud shortens lead times by eliminating the need to acquire and set up hardware. It gives an enterprise immediate access to powerful capabilities it can include in its applications, such as machine learning, identity management, and analytics.

Global reach. With twenty regional datacenter clusters and 160 other global points of presence where it has infrastructure, AWS makes it easier for an enterprise to serve a global market. It maintains fast performance in each region and satisfies data sovereignty requirements of different countries (rules that data has to stay within a given country’s borders).

The easy availability of electricity led to the explosion of innovation and creativity that resulted in all of the household appliances we use today. We don’t yet know what the easy availability of computing power will make possible. But we can already see that organizations are taking advantage of it to do things that would have been impossible a few years ago.

McDonald’s, for example, runs its global point of sale (POS) systems on AWS, including 200,000 registers and 300,000 other POS devices serving sixty-four million customers a day.3 When it decided to develop a mobile home delivery application, it was able to create and deploy it to customers in only four months.4 GE Oil & Gas was able to speed up its application development time by 77%.5 BMW was able to develop its car-as-a-sensor app (CARASSO) for its 7 Series cars in a matter of six months.6 Such is the speed of the digital world.

These are not just examples of performing tasks on an exceptionally large scale, but of doing things that simply would not have been possible before.

Artificial intelligence has been around since the 1950s; it is the science of giving computers abilities that resemble human intelligence. Scientists have tried to solve a range of problems—playing chess, recognizing objects in photos, understanding human languages, controlling robots, and the like—using a variety of software engineering techniques. Of those, machine learning has become practical for business use, partly because of advances in the field and partly because new, fast, restless hardware has made practical the intensive computing that is required.

Machine learning also makes solvable a whole new class of problems that would otherwise be too complex. As an example, consider the task of recognizing handwritten numerals. Since everyone’s handwriting is different, writing a conventional program that scans an image and identifies the number it represents would be practically impossible. But with machine learning, we can show the machine a large number of examples of handwritten numerals and tell it which number each one represents. This process is called supervised training. The computer uses that training data to construct a model it can apply to subsequent images. When you think about it, this is also how people learn to distinguish handwritten numbers—by seeing lots of examples and generalizing from them.
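
Here is a small sketch of supervised training on handwritten digits, using the sample digits dataset that ships with the scikit-learn library; the particular model, logistic regression, is just one reasonable choice among many.

# Supervised training in miniature: show the computer labeled examples of
# handwritten digits, let it build a model, then ask it to read digits it has
# never seen before.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # small 8x8 images of handwritten digits, labeled 0 through 9

# Hold some examples back so we can test the model on digits it hasn't seen.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)  # learn from the labeled examples

print("Accuracy on unseen digits:", model.score(X_test, y_test))
print("The model reads the first test image as:", model.predict(X_test[:1])[0])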

Machine learning is a complex subject and an endless source of topics for PhD dissertations. But in its quest to “democratize” machine learning, AWS offers pre-trained models for common tasks that anyone can use—even people totally unfamiliar with machine learning. These pre-trained models can recognize objects in still photographs or video; synthesize human-like speech in a number of languages; understand text written in natural languages like English; translate between languages; and transcribe speech into text.
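
Recognizing the objects in a photograph, for example, can be a single call to a pre-trained service such as Amazon Rekognition. The sketch below uses boto3; the image file name is a placeholder, and no training step is required on our side.

# Using a pre-trained model with no machine-learning expertise: ask Amazon
# Rekognition what it sees in a photograph. The file name is a placeholder.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("office_party.jpg", "rb") as photo:  # placeholder image file
    response = rekognition.detect_labels(
        Image={"Bytes": photo.read()},
        MaxLabels=10,        # return at most ten labels
        MinConfidence=80,    # only labels the model is at least 80% sure of
    )

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}% confident')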

C-SPAN uses machine learning to recognize politicians in its video footage, then index the footage for easy access.10 Stanford University uses it to scan retina images to diagnose diabetic blindness before the human eye would be able to.11 Fraud.net, a collaborative effort of online merchants, uses it to spot instances of potential fraud,12 and Hudl uses machine learning for predictive analysis on sports plays.13

The cloud, serverless computing, and machine learning make possible entirely new categories of applications, businesses, and IT processes. I’ve chosen them as examples because they are technical advancements with deep business implications, and because they illustrate how you can realize business benefits when technologists play a role in mapping out your enterprise’s future. It’s the combination of technology and business savvy that is so important, and it’s difficult to achieve unless you bring the technologists and “business” people together to innovate.