CHAPTER 4
Stop the Line

Cybersecurity is viewed as a critical defensive effort for our business. If you think about what we do and what we provide—water—in the event of something compromising our systems and if our customers lose faith in our ability to provide good, clean, safe drinking water, it’s catastrophic.

CEO, Utilities Company

The year was 2017. We saw the beginnings of an investigation into suspected Russian tampering in the U.S. presidential election. We witnessed the rise of the #MeToo movement and the subsequent fall of entertainment and business tycoons crushed under its weight. We endured catastrophic losses, courtesy of Mother Nature’s wrath, with some of the most devastating hurricanes in recent history, including Harvey, Irma, and Maria.

Amid all this turmoil that seemed to characterize 2017, we lost a luminary whose contributions to business will remain with us long into the foreseeable future. Tatsuro Toyoda, son of the founder of the Japanese automotive company Toyota, passed away quietly and without much media fanfare on December 30. He was 88 years old.

You’re more than likely familiar with auto giant Toyota. But you may not realize how profoundly one of its forefathers, Toyoda-san, influenced the way you and I do business today.

To understand Toyoda’s impact, we must go back to a different era—the early 1980s—when he took the reins of his company’s first American factory. At the time, American auto manufacturers dominated in market share. General Motors (GM) was far and away the world’s largest car company. Despite its success, it had a challenge: the U.S. government’s emission guidelines forced auto companies to produce small, fuel-efficient cars, and GM had historically struggled to do so. At the same time, a disturbing trend was emerging in the U.S. auto market overall. Japanese car companies were grabbing market share fast—so much so that the U.S. Congress was threatening to restrict auto imports.

This strange confluence of real and potential U.S. regulations—one that forced GM to produce small, fuel-efficient cars and another that threatened to limit Toyota’s car imports—created the unlikeliest of unions. Toyota and GM, fierce competitors in the market, became partners of sorts. The two companies teamed up to open a joint plant in Fremont, CA, home to a former GM production facility.

Each competitor got value from the interesting arrangement. Toyota would build GM a quality fuel-efficient small car that would finally turn a profit. In the process, GM would gain access to Toyota’s manufacturing principles—literally peeking in the cupboard to discover its competitor’s secret sauce. In turn, Toyota would learn to build cars in the United States and mitigate the risk of future car import restrictions.

Now, about that former GM plant in Fremont. If this “partnership” between two rivals wasn’t weird enough, the location they chose to set up shop bordered on the bizarre. The former GM plant in Fremont was an unmitigated disaster. Bruce Lee, who ran the western region for the United Auto Workers union at the time, called the Fremont employees “the worst workforce in the automobile industry in the United States.”1 Their behavior was infamous in the industry—from record absenteeism to lewd acts on the factory floor to sabotaging cars for the chance to earn overtime to fix them. If conduct was any indicator, these employees hated their jobs and their employer.

You may not find GM and Toyota’s decision to establish their new joint operation in a manufacturing facility with such a troubled history unusual. After all, why throw the baby out with the bathwater? The physical plant may have been completely acceptable. But the labor force needed changing—badly.

But GM and Toyota didn’t change the employees. In fact, they hired 85 percent of that former Fremont workforce (described as the worst in America by one of their own union leaders) to tackle the challenge of producing a profitable and high-quality fuel-efficient automobile—something GM hadn’t accomplished in the best of its U.S. plants.

Perhaps even more surprising, they flew some of those former Fremont employees to Japan to learn how to build cars the Japanese way. The first car, a yellow Chevrolet Nova, rolled off the assembly line in December of 1984. Almost right away, the Fremont plant was producing cars at the same speed and with as few defects per 100 vehicles as those produced in Japan.

What exactly was it that allowed Toyota to produce the first quality small car for GM using virtually the same workforce that was so subpar under the GM regime? It wasn’t a secret sauce as much as it was a secret ingredient: continuous improvement.

GM adopted its philosophy from Henry Ford. Factories were highly departmentalized, division of labor was the standard, and efficiency was king. If we had to sum up the GM culture with a bumper sticker slogan, it might be simply “Never stop the line.” In fact, those four words were a cardinal rule in the former GM plant. No matter what, the assembly line could never stop.

An interesting mandate, since so few workers showed up on some mornings that the production line couldn’t even start. On those days, GM would bring in people off the street to fill the void. And when the production line finally started, it didn’t stop.

Billy Haggerty worked in hood and fender assembly. Rick Madrid built Chevy trucks for the plant. Along with Bruce Lee, they were interviewed by NPR to give the rest of us a glimpse into just how strictly that golden rule was practiced at the former GM Fremont plant.2

Workers would deal with defects after the fact. For GM, it was about quantity over quality.

Toyota built its philosophy on continuous improvement. Not only were employees allowed to stop the line, but Toyota encouraged them to do so. The “andon cord” was within reach above the head of every worker on the line. (Andon comes from the Japanese word for paper lantern. Accordingly, workers could visually signal whether things on the line were just fine—green light; whether production quality was at risk—yellow; or whether a decline in quality meant it was imperative to stop the line—a red light.)

Toyota wasn’t just building cars; they were building a process—one that welcomed good ideas from any level of the company. One that relentlessly pursued and obliterated every inefficiency. One that persisted in setting the bar higher and higher.

This “Stop the Line” philosophy empowered every employee to be a plant manager of sorts and made quality an inherent part of everyone’s job responsibility. Toyoda brought the Japanese philosophy to U.S. auto workers. When the White House Automotive Task Force assessed GM decades later in 2009, during the latter’s Chapter 11 bankruptcy, it publicly acknowledged GM’s global production and procurement system, modeled on Toyota’s, as world-class and every bit as efficient as its Japanese teacher’s.3 As the leader of Toyota’s venture into Fremont, the late visionary deserves much of the credit for GM’s tutelage.

But Toyoda’s impact extends far beyond GM, or even the automotive industry. At around the same time as the Fremont plant’s success in the 1980s, the Total Quality Management (TQM) movement gained traction across multiple industries—where continuous improvement was the new face of management and quality the prize.

TQM eventually gave way to other movements like ISO 9000, Six Sigma, and lean manufacturing. While the standards and the names may have changed, the guiding principle didn’t. Quality was no longer a nice-to-have; it was essential to a company’s long-term success.

Security, as an endeavor, is similar to where quality was before we all caught TQM fever. When designing products and processes, we tend to think of security as tomorrow’s problem. It’s something we’ll get around to after a product exits the line. We’ll fix it after the fact. In many ways, security is an aftermarket afterthought. And that thinking must change if we are to embed security in everything we deliver.

If not, the results could be catastrophic.

The Internet of Terrorism

The year was 2016. You may not realize it, but a war occurred on October 21 of that year. It lasted only a day, but it will go down in infamy in the annals of cybersecurity. Because that’s when the Internet’s ramparts were breached by marauders intent on chaos at a scale not previously seen. It was the day the Internet broke.

The events of October 21, 2016, prove that truth can be stranger than fiction. What was unusual about that day was not that hackers deliberately set their sights on a company to inflict damage. That kind of headline news has become all too commonplace in our digital world. What was unique about the attack was the company that found itself in hackers’ crosshairs: Dyn.

While you may not have heard of Dyn in 2016, you undoubtedly used the services they enabled: Twitter, Netflix, Spotify, and Etsy, to name just a few. Among other things, Dyn was a domain name system (DNS) provider. When you entered a website address for one of these popular services, Dyn mapped it to its corresponding IP address.
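To make that dependency concrete, here’s a minimal Python sketch of the lookup every browser performs behind the scenes (the hostname is a stand-in, and this illustrates the general mechanism, not Dyn’s actual systems):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system resolver for the IP addresses behind a hostname."""
    # getaddrinfo performs the DNS lookup a browser does before connecting.
    results = socket.getaddrinfo(hostname, 443, proto=socket.IPPROTO_TCP)
    return sorted({sockaddr[0] for *_, sockaddr in results})

# The site's servers may be perfectly healthy, but if the DNS provider
# answering this query is unreachable, no client can find them.
print(resolve("example.com"))
```

If that lookup fails, the destination may as well not exist, which is precisely what millions of users experienced on October 21.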

On that fateful October day, hackers crippled Dyn with a distributed denial of service (DDoS) attack. DDoS attacks are a perennial favorite among hackers. They occur when hackers flood a website with excessive traffic to effectively bring it to its knees. Our Internet comprises multiple routes and destinations, not unlike an intricate highway system. When enemies target a website, they can launch a DDoS attack to congest it to the point of failure. While DDoS assaults are among hackers’ more popular threat varieties, two notable characteristics of the Dyn attack made it categorically special.

First, most DDoS threats target a particular company. In fact, companies in certain industries are more susceptible to these pernicious attacks than others. Take online gaming, one of the more popular industries targeted for DDoS assaults. If you’re an online gamer, you likely know why. Many online games are of the “high-twitch” variety, meaning victory depends on a player’s fast reaction time. If a hacker can congest the virtual highways that connect online players to their shared experience, response times slow to a crawl, making it impossible for you, say, to shoot your opponent before he fires a bullet in your direction. Though uniquely harmful to online gaming outfits, DDoS attacks can wreak havoc for any company with a digital presence. By flooding a website with garbage traffic, hackers can deny legitimate visitors access.

But the Dyn attack was distinct. It took out multiple websites with one hit. By congesting Dyn’s DNS service—the metaphorical equivalent of the postal address system on the web—hackers blocked entry to several popular sites. In one fell swoop, an entire region of the United States found some of the most popular web services unavailable as the Internet backbone itself gave way. If you imagine our nation’s participation on the Internet as a nighttime view from space, on that fateful October day, the Eastern seaboard simply disappeared. In our hypothetical view from on high, tens of millions of people and the digital infrastructure they rely on were swallowed up by darkness, apparently joining the Atlantic Ocean’s vast expanse. Hackers effectively erased the digital existence of one of the most important corridors of power on our planet.

The Dyn attack was unique in another way. DDoS attacks require enormous scale to execute. The typical website won’t be buckled by a few hundred or even a thousand motivated hackers each pinging it at the same time. It would require millions of attempts to deliver a crushing blow.

This is where another cybersecurity term that has made its way into the lexicon—the botnet—comes into play. In layman’s terms, a botnet is a network of devices enslaved by a hacker. The resulting zombie army is under the command and control of one or more bad actors, who commandeer the drones to do any number of things, including launching a distributed denial of service attack by flooding a website with requests. How do these botnets come under the control of their evil overlords? Through malicious software deposited on computers when users visit an infected site or download a contaminated message. In many cases, users are unaware their device has even been seized.

You might wonder how exactly the DDoS attack on Dyn was so different. The botnet employed was not an army of computers, or even mobile devices, which as you might expect are common soldiers in a hacker’s zombie legion. Instead, cybercriminals compromised ordinary connected household devices. Baby monitors, security cameras, and digital video recorders (DVRs) were among the recruits in the botnet that took down Dyn. Hackers exploited weak factory-default passwords in these connected devices to seize control. No user had to unknowingly download malicious software first. In this case, consumers did nothing to compromise their own devices. As it turns out, nothing is exactly what fit the bill: users also didn’t think to change their devices’ default passwords, assuming the average user would even know how to do so in the first place.

Within a moment of the Dyn network’s collapse, the Internet of Things (IoT) became the Internet of Terrorism. And, lest you believe that the Dyn attack was a fluke, contemplate one more frightening reality: The botnet that disabled Dyn was actively recruiting enrollees for its next attack several months after the incident. In an always-on world, hackers have billions of connected devices at their mercy, simply awaiting their next command from their new leader.

The Dyn DDoS attack opened our eyes to a new reality: The targets have become the weapons. What we used to protect, we must now be protected against. Just as data can be manipulated to deceive us, those harmless connected devices that make our lives convenient in so many ways at home and work can be turned against us to cripple the digital infrastructure upon which we (and they) rely. Cyber threats are now so pervasive that they lurk around every connected device, every bit of data we take for granted.

The Dyn attack proved that innovative hackers can exploit any connected product or service in their next attack. When cybercriminals slithered in through ordinary household appliances, cybersecurity slipped into the mainstream of product development.

Take connected cars, as just one example. In 2017, Toyota, Intel, and others formed the Automotive Edge Computing Consortium. The group estimated that the data volume between vehicles and the cloud will reach 10 exabytes per month around 2025—a projected 10,000-fold increase from 2017’s baseline. That’s equivalent to twice the volume of all words ever spoken by humans since the dawn of time.4
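It’s worth making the arithmetic behind that comparison explicit. Using the oft-cited estimate of roughly 5 exabytes for all words ever spoken by humans (the figure discussed in the piece cited in note 4), a quick back-of-the-envelope sketch:

```python
EB = 10**18  # one exabyte, in bytes

vehicle_data_per_month = 10 * EB                 # consortium estimate for ~2025
baseline_2017 = vehicle_data_per_month / 10_000  # implied by the 10,000-fold increase
all_human_speech = 5 * EB                        # rough estimate: all words ever spoken, digitized

print(vehicle_data_per_month / all_human_speech)  # 2.0 -- twice all human speech, monthly
print(baseline_2017 / 10**15)                     # 1.0 -- i.e., about a petabyte per month in 2017
```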

Weaponizing the IoT to take down the Internet is one thing. Weaponizing the eight million autonomous cars expected to be on roads by 20255 makes the Internet of Terrorism that much scarier. Those connected vehicles will depend on accurate, real-time data to assess their surroundings and navigate accordingly. They will rely on the 10 exabytes of data per month flowing between them and the cloud—data that will be a veritable treasure trove for hackers less motivated by profit than terror.

If you think your company doesn’t have to worry about such concerns since it’s not in the business of Internet backbones or connected cars, adversaries welcome your indifference on this topic. But Dyn shows that your company can simply be caught in a hacker’s crossfire. Dyn wasn’t the target of the attack. The Internet services they supported were.

If that still doesn’t compel you to think differently of the brave new world in which we find ourselves, consider this: How many of your employees work from home on an occasional basis? Now consider that the average home had 10 connected devices in 2016, the year of the Dyn attack, per Intel. The semiconductor giant predicted at the time that the figure would rise to 50 connected devices per household by 2020.6 If left unsecured, those devices can serve as a potential onramp for hackers to compromise your employees while they work from their home offices—and potentially seep into your company as a result. The edge of the corporate network is now in the home.

Lines are blurring. Home and work are colliding. The products and services we use as consumers, we use as employees. The IoT is just one example of how cybersecurity permeates more than what meets the eye. If you are an employee charged with product or service development, you are key to stopping the line whenever cybersecurity is conspicuous by its absence.

In this case, it’s not just your company depending on you for the same. It’s every user directly or indirectly touched by your creation.

W.I.S.D.O.M. for the Product Developer

Product developers are the first line of defense in embedding cybersecurity requirements in every product or service a company offers. There’s no need to reinvent the wheel on sound product development principles and checklists (of which there is no shortage in the market). By integrating cybersecurity concepts into tried-and-true design methodologies, developers play an essential role in hardening their companies’—and even their users’—cybersecurity defenses.

First, in the initial phases of design, customer input is imperative. Many companies actively solicit this feedback from customers through quantitative research, one-on-one interviews, customer advisory councils, and/or other methods. A simple, but profound, way to ensure cybersecurity is not an aftermarket afterthought is to straightforwardly ask customers about their cybersecurity requirements as part of this discovery phase.

Let’s say you’re in the business of manufacturing a [fill-in-the-blank] connected household device (like the kind used in the drone army that took down Dyn). Discovering what would make consumers more likely to change a factory-default password would be an important element of the design phase. Better yet, eliminating consumer procrastination altogether by, say, requiring a password change through an intuitive application when the connected device is installed removes a key security vulnerability right out of the box.
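What might that look like in a device’s onboarding code? Here’s a minimal sketch of a first-run setup routine that simply refuses to finish until the factory default is gone (the function, default list, and thresholds are hypothetical, not any vendor’s actual firmware):

```python
import hashlib
import secrets

FACTORY_DEFAULTS = {"admin", "password", "12345"}  # hypothetical shipped defaults

def complete_setup(new_password: str) -> bytes:
    """Finish first-run setup only once the factory default is replaced."""
    if new_password.lower() in FACTORY_DEFAULTS:
        raise ValueError("Choose a password that is not a factory default.")
    if len(new_password) < 12:
        raise ValueError("Password must be at least 12 characters.")
    # Store only a salted hash; the device never keeps the password itself.
    salt = secrets.token_bytes(16)
    return salt + hashlib.pbkdf2_hmac("sha256", new_password.encode(), salt, 600_000)
```

The design choice worth copying is less the code than the posture: the device doesn’t join the network until this step succeeds, so the user can’t procrastinate.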

Now let’s imagine you’re in software development. And let’s assume that consumers aren’t your target. You market to businesses. The same principle applies. In this case, finding out exactly how your customer will use your application securely prevents the need for bug-fixing down the road. Will your clients rely on the cloud, or do they prohibit the cloud for data upon which your application depends? On the flip side, have you designed a product to live within a firewall only to find your customer prefers a software-as-a-service (SaaS) consumption model? These and many other qualifying questions during the discovery stage are invaluable to implanting security in the foundation of your product.

Since we’re on the topic of software developers, here’s another one for you. Make sure security is part of any minimum viable product (MVP). When Eric Ries, author of The Lean Startup, popularized MVP, he captured the essence of what any startup endeavors to do: launch a new product that allows a development team to collect the maximum information on customer usage and acceptance with the least amount of effort. The process lends itself well to cloud-based applications that have very few upfront capital requirements (since many leverage public cloud infrastructure offered by AWS, Azure, and Google). Essentially, it allows for quick learning, fast iterations in product development, and a continuous feedback loop with customers. All good stuff—especially if it lets a startup (or any company) avoid the costly mistake of launching a software product that fizzles in the market.

Here’s the thing: Do not focus your attention so much on the word “minimum” in the moniker that you ignore the one right after it, “viable.” Security must be built in, not bolted on, whether for a product intended only for the early adopter or for the mass market. Security must be part of your minimum viable product requirements.

Next, when designing the product, be deliberate about how, where, and for what amount of time your company will use customer data. There are regulations that will force your company to care about such matters, assuming it needs the extra motivation. So what I’m talking about here are not the mandated requirements issued by regulatory authorities. Instead, I’m referring to the ethical standards to which your company will hold itself accountable. To be clear, the higher of the two bars—the legal requirement or your company’s ethical principle—is the waterline on this test.

For example, in the first chapter, I discussed a hacker’s defacement of McAfee’s company page on a popular social media platform. While there was no customer data in play with our event, it will help me paint the picture of what I mean on this point. Since no data was stolen and no McAfee system compromised, McAfee was under no legal obligation to report the hack at all, let alone alert the media.

But McAfee holds itself to a higher standard than the legal requirement. In this case, the social media pages of our employees were also defaced, along with McAfee’s company profile page, when the hacker exchanged our logo for a graphic image. That image showed up for several hours on the personal page for any employee who had McAfee listed on his profile. McAfee has a company value we uphold with employees: We practice inclusive candor and transparency. While the legal limit did not require any reporting on our part, our own company value demanded more.

So the day after the hack occurred, we published a front-page intranet story admitting the same to all employees. We knew the risk was that a story that had not received any real media attention (given the attack occurred on an otherwise quiet Easter Sunday) could ignite on a Monday morning should just one employee send it to the media. Thankfully, that didn’t happen. In the end, it was a risk we had to take if we were to uphold a company value that exceeded our legal requirement.

So define your data requirements clearly and consciously in the design of any new product or service—upholding the higher of either legal requirement or ethical standard. I’ll give you one more example to put a finer point on it. To date, one of the only U.S. regulators to issue explicit guidance on reporting a ransomware attack is the Department of Health and Human Services (this instance of regulatory guidance is an important step forward, especially considering that healthcare is a frequent target of ransomware; but, given that it stands in exceptional company, it’s also representative of why ransomware is likely much worse than the actual reported industry figures would have you believe). If your company has a similar value of transparency with customers and would require disclosure of a ransomware attack, then you will likely have a different point of view on how much risk you want to take in storing customer data, even if the law may be more permissive in this regard (that is, until the U.S. and other countries take the EU’s lead and revisit guidance associated with customer privacy).
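One way to keep such a standard from being merely aspirational is to declare it in code, so retention limits are enforced mechanically rather than remembered. A minimal sketch, with invented categories and periods:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: hold each category of customer data no longer than
# the stricter of the legal requirement and the company's ethical standard.
RETENTION = {
    "support_tickets": timedelta(days=365),
    "usage_telemetry": timedelta(days=90),
    "payment_records": timedelta(days=7 * 365),  # a legal floor may exceed internal preference
}

def is_expired(category: str, collected_at: datetime) -> bool:
    """True once a record has outlived its declared retention period."""
    return datetime.now(timezone.utc) - collected_at > RETENTION[category]
```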

Next, build security ownership into each phase of the product lifecycle. How will your product or service be upgraded to address the next threat vector? What department is accountable for ongoing maintenance and patching? Who is accountable for handling incidents after a breach occurs? Where will budgets sit? You get the idea. Be explicit about swim lanes across the company and where and when security decisions must be made and resources allocated.
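Even a lightweight artifact can force those swim lanes into the open. For example, a simple ownership table that fails loudly when any lifecycle phase lacks an accountable owner (the phases and teams below are placeholders, not a prescribed org design):

```python
# Hypothetical mapping of product lifecycle phases to accountable owners.
SECURITY_OWNERS = {
    "design review": "product engineering",
    "ongoing patching": "sustaining engineering",
    "incident response": "security operations",
    "customer notification": None,  # unassigned, so this should block launch
}

unowned = [phase for phase, owner in SECURITY_OWNERS.items() if owner is None]
if unowned:
    raise RuntimeError(f"No accountable owner for: {', '.join(unowned)}")
```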

The effort required to introduce a product can seem simple compared with the effort needed to maintain it. It’s only after the first customers adopt your creation that the inevitable issues of support, scalability, and—yes—security rear their ugly heads. By then, the warm-and-fuzzy feelings of your triumphant launch will be distant memories. Having clear ownership for these certainties established upfront, before your product ever reaches its first customer or user, will save time and mitigate risk down the road.

It’s no different from the discipline you likely already practice when contemplating other key issues in the product lifecycle. In addition to signing off that support and scalability concerns have been planned for, the highest functional leader in the company for each department (such as customer support, engineering, or marketing) should also explicitly review and approve that the security requirements for her function have been sufficiently built into the product before it is introduced to market.

Last, but certainly not least, you must stop the line should security be lacking or missing at any part of the product launch process. More importantly, you should actively encourage this stop-the-line philosophy for any employee engaged directly or indirectly in the development process. This is never easy to execute since time is money in business. But it’s harder still for companies that don’t have all workers concentrated in one physical location (like that Toyota plant in Fremont, with tangible cues in its environment, including the universal colors of red, yellow, or green that immediately and unmistakably communicated the current state of quality to all plant employees). Likewise, many companies don’t have a pull cord or button that employees can activate to immediately stop a line. There’s no physical production line where software is concerned, for example.
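In software, the closest analog to the andon cord is a release gate: a pipeline step that stops the “line” when any security check fails. A deliberately simplified sketch, with stand-in checks rather than a real scanner’s API:

```python
def no_default_credentials() -> bool:
    # Stand-in: a real check might scan configurations for shipped defaults.
    return True

def dependencies_free_of_known_flaws() -> bool:
    # Stand-in: a real check might query a vulnerability database.
    return False  # simulate a finding, to show the line stopping

SECURITY_CHECKS = {
    "no factory-default credentials": no_default_credentials,
    "no known-vulnerable dependencies": dependencies_free_of_known_flaws,
}

def release_gate() -> None:
    """Red light: halt the release, visibly and with reasons, if any check fails."""
    failures = [name for name, check in SECURITY_CHECKS.items() if not check()]
    if failures:
        raise SystemExit("STOP THE LINE: " + ", ".join(failures))
    print("Green light: ship it.")

release_gate()
```

The mechanics matter less than the property they preserve: any employee can add a check, and a failing check stops the release as surely as a pulled cord stopped Toyota’s line.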

In fact, if you’re like me, you work for an employer with a hybrid and distributed environment of company locations and remote employees. So you’ll need to be resourceful in using the virtual tools at your disposal to communicate when an employee has stopped the line. Companies are accustomed to lots of fanfare upon a product’s successful launch. It’s only natural to applaud such a milestone. While not losing a celebratory culture upon a product’s release, consider practicing the same level of recognition when an employee stops the production line due to a security flaw.

Employees notice reward and recognition in companies. It’s one part of the culture that is highly visible and clearly communicates what is (and is not) important to senior leadership. When an employee stops the line, find a way to reward her. Then publicly recognize her through whatever communication vehicle is most leveraged by your company (here’s where your HR and marketing colleagues can help you). The point is that you want employees to know that your company takes security as seriously as it does quality. When they see for themselves how you’ve changed your reward-and-recognition system to acknowledge as much, you’ll find more diligent employees speaking up when security is falling down. And you’ll save your company from potential financial, reputational, or intellectual property losses down the road.

* * *

Companies have come a long way from the cavalier, haphazard mentality once given to product quality. Educated buyers can easily check a company’s quality track record before making a commitment they may regret. Today, companies know the value of a quality experience. They measure it in customer satisfaction, net promoter scores, and retention. They compete for it in customer awards. And they see the value of it in their financial results.

Cybersecurity is the quality of our generation. Like quality before it, it’s underestimated, underappreciated, or simply unmeasured. A good friend of mine often says, “What’s bred in the bones will come out in the flesh.” A company with strong cybersecurity marrow in its bones stands to benefit in less risk, fewer financial losses, more customer trust, and, yes, higher quality thanks to products inherently designed and built with both cybersecurity and quality in mind. As the dreamers and designers of tomorrow’s products, developers have an essential role to play in embedding cybersecurity in everything we consume.

Notes

  1. NPR, “The End of the Line for GM-Toyota Joint Venture,” March 26, 2010, https://www.npr.org/templates/transcript/transcript.php?storyId=125229157.
  2. Ibid.
  3. David Kiley, “Goodbye, NUMMI: How a Plant Changed the Culture of Car-Making,” Popular Mechanics, April 2, 2010, https://www.popularmechanics.com/cars/a5514/4350856/.
  4. Verlyn Klinkenborg, “Editorial Observer; Trying to Measure the Amount of Information That Humans Create,” The New York Times, November 12, 2003, https://www.nytimes.com/2003/11/12/opinion/editorial-observer-trying-measure-amount-information-that-humans-create.html.
  5. Bret Kenwell, “This Is How Many Autonomous Cars Will Be on the Road in 2025,” TheStreet.com, April 23, 2018, https://www.thestreet.com/technology/this-many-autonomous-cars-will-be-on-the-road-in-2025-14564388.
  6. Shilpa Phadnis, “Households Have 10 Connected Devices Now, Will Rise to 50 by 2020,” ETCIO.com, August 19, 2016, https://cio.economictimes.indiatimes.com/news/internet-of-things/households-have-10-connected-devices-now-will-rise-to-50-by-2020/53765773.