Employees are a company’s greatest asset—they’re your competitive advantage.
—ANNE M. MULCAHY
WE SPENT CHAPTERS 6, 7, AND 8 DEFINING what a Healthy Building is. Now we’ll look at examples of how to measure the health impact of a building because, as the influential management guru Peter Drucker famously put it, “If you can’t measure it, you can’t improve it.” To date, no one is really measuring building performance effectively. But you can—and you should. In this chapter, we want to show you how. We’ll do it in two parts, first by showing you how it’s done badly, and then by showing you how to do it right.
Felix Barber and Rainer Strack wrote an article in the Harvard Business Review called “The Surprising Economics of a ‘People Business,’ ” in which they argued that the performance of employees drives the bottom line.1 We agree. And though it’s not exactly a revelation that in jobs calling for human labor, or wisdom, or creativity, or analytics, the performance of employees will affect the company’s performance, their key insight was captured in this sentence: “Business performance measures and management practices don’t reflect the particular economics of people-driven businesses.” In short, there is a disconnect—we know human performance drives company performance, but we’re terrible at measuring it.
In fact, we’re not just terrible at measuring it; oftentimes we are measuring the wrong thing. Take the work of leading Silicon Valley investor John Doerr and the insights he offers in his book Measure What Matters.2 Doerr is chairman of Kleiner Perkins and was an early backer of Amazon, Google, Uber, and other companies. His work on what he calls “OKRs”—Objectives and Key Results—extends Drucker’s ideas into the startup and innovation world, charting the path from hope to execution. Doerr has helped move companies from measuring Key Performance Indicators (KPIs) that don’t matter to measuring those that do. The most well-known, and most important, of his interventions were his conversations with the cofounders of Google, Larry Page and Sergey Brin, back in the days when Google was still being run out of a garage in Menlo Park. He convinced them to use his OKR system, which was expressly designed to measure and track success.
We want to marry Barber and Strack’s insight and Doerr’s rigor to extend the “what gets measured” line of thinking to include the health performance of buildings—and to advance the toolkit for measuring the right things. Our central thesis in this book is not only that employee performance drives the bottom line but also that the building (or indoor environment) plays a vital role in optimizing that human performance, and that this building performance has been mismeasured to date. We are putting far too much faith in self-reported employee surveys, which, as you will see (and as any epidemiologist would tell you), have a tendency to be wildly misrepresentative.
One of the most commonly used tools to measure building-related productivity and performance is the “Post-occupancy Survey” (also commonly known as a “Post-occupancy Evaluation”; we are going to stick with “survey,” for reasons you’ll soon see). As people have begun to appreciate the value of Healthy Buildings, there are now all sorts of claims being bandied about regarding the health of a building and the productivity of employees. Since some of these claims are based on surveys, they require some scrutiny. We will look at five real-world examples of Post-occupancy Survey data and consider how the data is being used to describe the impact of a workspace on productivity and health. Our goal here is not to say that surveys cannot be used at all but rather that if they are going to be used, they must be used very carefully. For each fatal flaw, we’ll also give you the solution for how to avoid it, because as Joe’s brother Brian always says, if you point out problems without offering a solution, that’s just complaining.
Warning: what follows will be an equal-opportunity critique.
Here are five claims made by companies about a new office space they’ve designed or moved into after conducting a Post-occupancy Survey. We’ve picked a few especially notable examples, but the reality is that everyone is doing this.
Well, to anyone with even a slight bit of healthy skepticism, much of this reads just as it should: like B.S. That’s why we like calling these “Post-occupancy Surveys,” so we can use the acronym POS (use your imagination).
To keep this more highbrow, we’ll couch our critique about why POSs are problematic using two epidemiological concepts: selection bias and dependent measurement error. If this sounds like it might get too technical, rest assured, every time you read a word or phrase that belongs in an epidemiology textbook, you can simply swap in the phrase “common sense.”
At its most basic, selection bias happens when the people who take the survey don’t represent the underlying population that could, and should, be queried. When looking at any of these bold conclusions from Companies A–E, we should immediately be asking ourselves a few questions about sample size, representativeness, self-selection, and loss to follow-up:
We don’t know any of the specifics that underpin the results from Company A, and we don’t mean to imply it did anything wrong, but let’s put some hypothetical numbers to this to see how these four aspects of selection bias could potentially influence our interpretation of the headline finding that “91.6 percent of employees say they feel healthier because of indoor air quality improvement.”
There are a few ways to get to that 91.6 percent number. Let’s suppose that Company A is a 600-person company, the company sent the survey to all 600 employees, and 500 ended up responding and taking the survey—a pretty good sample size and response rate (83 percent). If this were the case, we would know that 458 employees reported feeling better because of indoor air quality (458 / 500 = 91.6 percent).
But what if, in that same 600-person company, only 83 people took the survey? That would mean that 76 people responded positively about the building, giving us the same reported percentage of people satisfied with the air quality (76 / 83 = 91.6 percent). Under either scenario, the company would be technically correct to report that 91.6 percent of people report feeling good about the air quality, but the implications are vastly different. The claim that 91.6 percent reported positive feelings about indoor air quality would mean something different if the survey sample size represents less than 15 percent of the company.
Let’s assume that 500 of the 600 people in the company did take the survey and the sample size is not the issue. But what if the 500 people who were given the survey were all executives and knowledge workers who had offices on the exterior of the floor plan in a traditional office setup—you know, the window offices—and the 100 who weren’t given the survey were the administrative staff located on the interior in the cubicle farm? Taking our hypothetical to the extreme, if all 100 people in the cubicle farm disliked the air quality, the “true” percentage of people who liked the air quality drops to 76.3 percent (458 / 600 = 76.3 percent). Quite different from the 91.6 percent that was reported.
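To make the arithmetic concrete, here is a minimal sketch in Python. Every number in it is hypothetical, taken from the scenarios above rather than from Company A’s actual data:

```python
def reported_pct(positive, respondents):
    """Percentage of respondents who answered positively."""
    return 100 * positive / respondents

# Scenario 1 (hypothetical): 500 of 600 employees respond, 458 positive.
print(round(reported_pct(458, 500), 1))   # 91.6 -- the headline number

# Scenario 2 (hypothetical): only 83 respond, 76 positive -- same headline.
print(round(reported_pct(76, 83), 1))     # 91.6

# Scenario 3 (hypothetical): count the 100 excluded cubicle-farm workers,
# assuming all of them disliked the air quality.
print(round(reported_pct(458, 600), 1))   # 76.3
```

Three very different surveys, one indistinguishable headline number. Only the denominator tells you which story you are reading.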
Now let’s assume the company sent the survey to all 600 people in its company, thereby avoiding any intentional selection bias. What if the people who decided to complete the survey were somehow different from those who opted not to complete the survey? The people who decided to take the survey are what we call “self-selected”; they willingly raised their hand and asked to participate. The difference between responders and nonresponders becomes critically important to understand because we know from the epidemiological literature that self-selectors are very different from others. (You might think of the Yelp effect—the people who post on Yelp are usually either extremely satisfied or extremely dissatisfied, and they have time to post a review on a website.)
For Company A, we would want to know who the self-selected responders are. Are they all from the marketing department, the executive ranks, and the building management team responsible for air quality—the people who most certainly know that Company A just invested millions of dollars into this new building and also know that the company is going to use these results to market its products? Despite this company’s best attempts at a representative survey, did the administrators in the cubicle farm all decline to answer the survey, so that in the end the survey really just sampled those in the private offices again?
Understanding and evaluating selection bias, in all its forms, is so important that in nearly every single peer-reviewed epidemiological study, the very first table in the paper is one that shows the sample size and examines, side by side, any potential differences between responders and nonresponders. Epidemiologists do this to show that there was no selection bias introduced as a result of who ended up taking the survey. In our hypothetical, if the results are to be believed, the headline finding would have to include a similar table showing that the 500 who took the survey were similar to the 100 who did not across things like age, gender, title, salary, education, office type, and office location.
This problem can be avoided, even with a smaller sample size, through a random selection process.
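As a sketch of what that looks like in practice (the head count and sample size here are hypothetical), a simple random sample gives every employee the same chance of being surveyed:

```python
import random

def draw_survey_sample(employee_ids, n, seed=None):
    """Simple random sample: every employee, window office or cubicle,
    has an equal chance of being selected."""
    return random.Random(seed).sample(employee_ids, n)

all_employees = list(range(1, 601))    # hypothetical 600-person company
sampled = draw_survey_sample(all_employees, 150, seed=42)
print(len(sampled))                    # 150 randomly chosen employees
```

Random selection doesn’t eliminate nonresponse bias on its own, but it removes any systematic tilt in who gets asked in the first place.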
Now, even if Company A addressed the small sample size and representativeness issues, there is still another potential source of selection bias: it only surveyed the people who were in the office that day. In epidemiological terms, this is a type of selection bias that can arise from what is called loss to follow-up (and a kind of corollary, the “Healthy Worker Effect”).
This type of bias arises because, on average, people who are at work are different from those who are not. Those not at work may be absent because they are sick or otherwise unable to work, or because they have moved on to another job and can no longer be contacted. They are “lost to follow-up.”
Let’s say Company A did manage to survey 500 of its 600 total employees with a representative and random survey with no self-selection bias. Are the 500 people who were in the office that day the true denominator? In other words, do they represent the entire population potentially “at risk”? In addition to employees out for client meetings, conferences, or vacation, what if someone was out sick that day? Then he or she wouldn’t be included in the survey. And what if the building was the reason that employee was out sick? That is, what if you were only surveying “healthy workers”? What if a few people in the company absolutely hated the new workplace, so much so that they quit and no longer worked there when the survey was administered? All of these employees would be lost to follow-up and wouldn’t be included in the survey either. (Instead of just calling this the “Healthy Worker Effect,” we might call this the “Happy Worker Effect,” where the only people left in the company are those who actually like the company, the disgruntled or unsatisfied having moved on.)
This problem can be avoided by ensuring that the survey captures all of those “at risk,” not just the healthy and satisfied.
This is just a hypothetical, constructed so we could explore the issues in Fatal Flaw 1—we don’t actually know the sampling details for Company A that underpin the claim that “91.6 percent of employees say they feel healthier because of indoor air quality improvement.” And therein lies the problem.
With the basics of selection bias behind us, we want to get to another major problem with these POSs: their use to find causal associations between design features and outcomes. Those conclusions can be erroneous as a result of the potential for these POSs to create what is called dependent measurement error.3
To show you how this can be a problem, we’ll start with a hypothetical and then show you an actual example. Suppose we ask you, a happy person, how you like the room you’re in right now. You say, “It’s great. I love the air quality and lighting in here.” Then suppose we also ask you how you’re feeling. You answer: “Great.” Any headaches? “No.” Any fatigue? “No.”
Now, we turn to your colleague. You know the one. We ask him, How are you feeling today? “Terrible.” Any headaches? “All day, every day.” Any fatigue? “I’m exhausted.” Then we ask him, How do you like the building and room you’re in right now? “I hate it.” How is the air quality? “It’s terrible.” And so on. You get the picture.
This POS is really testing whether people are stoics or complainers. The stoic is likely to answer all questions similarly—in a positive manner. And the complainer will likely do what complainers do—answer everything negatively. This is dependent measurement error, and here’s why it is so insidious. What researchers, survey analysts, or companies typically do next is put the responses of the stoics and complainers together, along with those of everyone else who makes up the middle ground and took the same survey; then they plot it all out and draw a nice regression line.
Voila! You have yourself a very strong, but misleading, relationship between an “exposure” and an “outcome,” with complainers anchoring the bottom left and stoics driving the top right. This is usually backed up by fancy-sounding but meaningless phrases like “statistically significant” results that give the study some imprimatur of being robust. However, if you took the stoics and complainers out of the survey and only focused on those in the middle (the gray open circles in our figure), the figure would no longer show any relationship between the two variables.
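A quick simulation shows how this happens. Everything below is invented for illustration: the middle-ground respondents answer the air-quality and symptom questions independently, yet adding a handful of stoics and complainers manufactures a strong correlation:

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * sx * sy)

rng = random.Random(0)

# Middle-ground respondents: air-quality rating (0-10) and symptom-free
# score (0-10) drawn independently -- no true relationship between them.
middle = [(rng.uniform(4, 7), rng.uniform(4, 7)) for _ in range(200)]

# Stoics rate everything high; complainers rate everything low.
stoics = [(rng.uniform(9, 10), rng.uniform(9, 10)) for _ in range(20)]
complainers = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(20)]

r_middle = corr(*zip(*middle))
r_all = corr(*zip(*(middle + stoics + complainers)))

print(round(r_middle, 2))  # close to zero: no real relationship
print(round(r_all, 2))     # strongly positive: the extremes create the "finding"
```

The 200 middle-ground respondents contain no exposure-outcome relationship at all; the 40 respondents anchoring the corners of the plot do all the work.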
This is called dependent measurement error; the measurements of exposure and outcome are dependent on each other. The assessments of exposure and outcome are not disentangled. That is, they are not independently assessed.
The issue? The ensuing analysis purports to show a relationship between two factors when actually what has been “discovered” is that this company, like all companies, has some stoics and some complainers. The implication? Companies then report these spurious “findings” and executives may make decisions about their company and buildings based on them.
This problem can be avoided by using an objective measure of exposure (for example, measuring air quality in the environment), an objective measure of an outcome (for example, cognitive function tests), or objective measures of both.
The higher-order, fundamental flaw we just examined is that POSs are subjective and rely only on human perceptions. This makes them prone to bias and dependent error. The solution is to track independent, objective measures of performance across an array of indicators. Businesses have been doing exactly this for decades. Now we just need to apply these measurement techniques to buildings.
Businesses track KPIs every second, every day, every week, every month. They track things like revenue and return on equity; earnings before interest, taxes, depreciation, and amortization (EBITDA) and net profit margin; and operating cash flow. But if we want to capitalize on the 90 percent cost of our buildings—the people inside them—are traditional KPIs the right way to go about it? The short answer is no. Using traditional KPIs has led to the mismeasurement of “people businesses,” as shown by Barber and Strack.
“Measuring what matters,” to our mind, means measuring health performance. The rationale is straightforward—if people constitute the vast majority of your business expense and productivity, and their health is a key determinant of their ability to be productive, then the most “key” KPI is health. So, as Joe and his colleagues argue in a recent article, companies need to start being intentional about how they measure the health and well-being of their employees. This means measuring Health Performance Indicators (HPIs),4 and it goes way beyond using POSs.
The HPI concept is all about tracking the factors that can be leveraged to optimize the building for health and performance. In this book we focus on how the HPI concept can be applied to the building, but it can be extended to the entire business enterprise. (You might think of other factors in a company that influence worker health and the bottom line but aren’t building related: company culture, maternity and paternity leave, autonomy, salary, purpose, and other health-promoting activities not linked squarely to the building. A “toxic” or adversarial work culture can have a significant negative impact on health, as can poor sleep, stress, and long hours.)
For now, we will stay focused on buildings and will populate our framework with new HPIs that we think all companies should consider tracking that relate to their building. Because really, after you have spent so much time, effort, and money sorting through candidates to find the best and the brightest—the internally motivated and highly skilled—wouldn’t you want to create the optimal working environment to maximize the performance of your investment?
In creating this HPI framework, we adopted, or rather co-opted, the language of KPIs so that we could use terms and concepts that would be very familiar to the business community and therefore easy to implement. As with KPIs, there are leading HPIs (“before impact”) and lagging HPIs (“after impact”); some are direct indicators of health (that is, they measure the people) and some are indirect (that is, they measure the building). A nice way to visualize this is to split the HPIs into quadrants.
In their original research paper that briefly touched on HPIs, Joe and his colleagues populated this framework with some examples. For this book, we have relied on our presentations, workshops, and conversations over the past two years with executives across various industries (for example, commercial real estate, tech, and pharma) and across various functions in their companies (for example, Human Resources, C-Suite, and facilities) to populate the framework with some new HPI ideas. (HPIs will necessarily be different for each company, particularly the direct indicators on the top half of the framework, but the ones on the bottom related to buildings are universal.)
Let’s start with the top left quadrant and work our way around counterclockwise.
At the end of the year, businesses can track several metrics to understand how health performance as a result of the building may have been affected that year. This includes tracking gross-level trends on things like total employee sick days, health-care utilization, and specific illness trends, such as an uptick in asthma attacks or influenza cases. Importantly, the key to determining whether these represent potential building-related issues is what’s written in the box at the top center—you have to analyze and benchmark results against normative spatial and temporal data (this is known as spatiotemporal benchmarking). What the heck does that mean? Put more straightforwardly, companies should track these indicators by looking for differences over space and time, both within and outside their organization.
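As a rough sketch of the idea, with invented building names and rates, spatial benchmarking can be as simple as comparing each building’s latest rate against the average of its peers:

```python
# Hypothetical annual sick-day averages per employee over four years.
sick_days_per_employee = {
    "Building A": [4.1, 4.3, 4.0, 4.2],
    "Building B": [4.2, 4.0, 4.4, 4.1],
    "Building C": [4.3, 6.8, 7.1, 7.4],   # jumps after year 1 -- worth a look
}

def flag_outliers(data, threshold=1.5):
    """Flag any building whose most recent rate exceeds the average of
    all other buildings by more than `threshold` days (spatial benchmark).
    The time dimension is the year-over-year series itself."""
    flagged = []
    for name, series in data.items():
        others = [s[-1] for n, s in data.items() if n != name]
        baseline = sum(others) / len(others)
        if series[-1] - baseline > threshold:
            flagged.append(name)
    return flagged

print(flag_outliers(sick_days_per_employee))  # ['Building C']
```

A real analysis would also benchmark against external data (industry or national illness rates), but even this minimal internal comparison turns raw HR data into a building-level signal.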
For an example of how analyzing these types of HPIs can lead to actionable information, take the recent investigation led by research associate Jose Guillermo Cedeno Laurent on Joe’s Healthy Buildings team.5 He analyzed health record data from university students living in different buildings and, simply by stratifying the results by building on the campus, found that students who lived in one upperclassmen building on campus had strikingly lower rates of allergies, year after year, over a five-year period. The health data was a clue that something was different in this building. But what was it? The value of analyzing the HPIs in this upper left quadrant was that it tipped us off that there might be something interesting in this one building. Because of what we saw in the health data, we did a follow-up investigation. It turned out that this building was the one in the study with mechanical ventilation, supplying filtered air at higher ventilation rates. (Surprise, surprise.)
Just as one KPI does not tell you everything you need to know about a company, the same holds true for HPIs. But this group of HPIs, taken together, can provide a strong indicator, using data most businesses already collect, of direct impacts of the building on health.
A 300-person services firm operating out of a newly renovated space on the outskirts of a major US city had a process for formally monitoring employee illness trends. The building was originally part of an old industrial complex that had been newly renovated and rehabbed as office space, with beautiful high ceilings, tall windows, and an open floor plan in some areas with interesting second-story office and meeting spaces that looked out over the main hall. After reviewing the illness trends in one year (lagging and direct HPIs), the company noticed something unusual—two of its longtime employees who worked on the same floor had been diagnosed with Bell’s palsy, a weakening of the facial muscles on one side, causing half of the face to droop. The etiology of Bell’s palsy is unknown, but there are several hypotheses, including viral infection. There is also some evidence that environmental exposures are a risk factor, including exposure to volatile organic compounds (VOCs).
Concerned, the company opened a formal inquiry to dig deeper into the potential problem and, in the process, learned of two more Bell’s palsy cases in its workforce in the same time period. It hired an occupational physician and epidemiologist who, as we suggest in the top middle box in our HPI framework, compared the incidence rate within the building, across buildings, and even with the general population using national disease incidence data (that is, spatiotemporal benchmarking). The epidemiologist confirmed that this rate of Bell’s palsy in a workforce that size was outside the bounds of what could be expected as a result of chance alone. Based on this finding, the firm initiated an environmental investigation led by Certified Industrial Hygienists, who discovered that there was a plume of VOCs in the groundwater below the building. Solvents had been dumped onto the land many years earlier at an adjacent building, contaminating the water below, which spread into a plume that now reached under this newly renovated building. Testing of the indoor spaces confirmed that VOCs from the groundwater under the building were permeating up into the new building. (This is not that uncommon, and it is called vapor intrusion.) The fix? Several tweaks were made to the mechanical system to help keep the building positively pressurized (a negatively pressurized space acts like a vacuum and sucks the VOCs into the building), and a sub-slab vapor intrusion remediation system was put in place.
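The kind of chance calculation the epidemiologist performed can be sketched with a simple Poisson model. The incidence rate, workforce size, and case count below are hypothetical stand-ins, not the firm’s actual figures:

```python
import math

def poisson_tail(observed, expected):
    """P(X >= observed) for a Poisson-distributed count with mean `expected`."""
    p_below = sum(math.exp(-expected) * expected ** k / math.factorial(k)
                  for k in range(observed))
    return 1 - p_below

# Hypothetical benchmark: a background incidence of roughly 25 cases per
# 100,000 person-years, applied to a 300-person workforce over one year.
expected_cases = 25 / 100_000 * 300          # about 0.075 expected cases
p_value = poisson_tail(4, expected_cases)    # chance of seeing 4 or more cases
print(f"{p_value:.1e}")                      # a vanishingly small probability
```

The Poisson model is the standard first-pass tool for asking whether an observed count of rare events exceeds what chance alone would produce; a result this extreme is exactly the signal that justifies an environmental investigation.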
Moving down to the lower left quadrant, we get into the realm of indirect measures of health. (You can ignore the box labeled “The Pulse” for now. It’s so important that we’ll dive deeply into that after we work our way around the HPI quadrants.) In this quadrant you’ll see a few indicators that businesses may also track at the end of the year or end of the month—indirect measures of health performance, such as tracking employee perceptions (done right!) of the building and air quality, or after-the-fact observations about the building or unusual events (unusual odors, systems failing unexpectedly).
Consider two related HPIs here—space utilization and time spent at office—and think how this might play out in companies with work-from-home models. Despite the relatively recent increases in the number of companies moving to such models in order to save money on real estate, many companies are pushing back on that philosophy and promoting collaboration through more face-to-face interaction at the water cooler. Take IBM, which in 2009 went to an aggressive work-from-home model and then pivoted to a full reversion of that policy less than a decade later.
What does this have to do with HPIs? If your goal is like that of IBM (or other companies that want people in the office, such as Google, Apple, Aetna, and Yahoo), then you definitely want to be sure that the building you’re making your employees move back into is one they’ll be happy to be in; otherwise, you run the risk of losing them. How might you find out how effective your enhanced building is at bringing workers together to collaborate? Track and measure an HPI like how much time people actually spend in the building, and see whether this varies across different buildings or before or after a Healthy Building intervention. If you like your office, or feel more productive there, chances are that the amount of time you spend there will go up. (If you’re thinking, sarcastically, “Yeah, people love being tracked this way,” you might consider that this is already happening, just not so overtly. Every time you log into your computer, the company knows where you are, just as it does every time you send an email. More than knowing when you are there, it even knows where you are in the building; as you move throughout your building during the day, the phone in your pocket is constantly pinging the Wi-Fi, so you are being tracked every minute of the day.) This type of data can be used to understand what spaces are working for your employees and what spaces aren’t, letting you prioritize your next renovation.
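As a sketch of how such a presence HPI might be computed from badge or Wi-Fi logs (the log format and entries below are invented for illustration):

```python
from datetime import datetime

# Hypothetical presence logs: (employee, first seen, last seen) per day.
logs = [
    ("emp1", "2019-03-04 08:55", "2019-03-04 17:40"),
    ("emp1", "2019-03-05 09:10", "2019-03-05 16:05"),
    ("emp2", "2019-03-04 10:00", "2019-03-04 15:30"),
]

def mean_hours_on_site(entries):
    """Average daily hours in the building -- one simple presence HPI."""
    fmt = "%Y-%m-%d %H:%M"
    hours = [
        (datetime.strptime(out, fmt) - datetime.strptime(inn, fmt)).seconds / 3600
        for _, inn, out in entries
    ]
    return sum(hours) / len(hours)

print(round(mean_hours_on_site(logs), 2))  # 7.06 hours per person-day
```

Computed before and after a Healthy Building intervention, or across buildings, this single number becomes a crude but objective gauge of whether people actually want to be in the space.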
In late 2008, the US Consumer Product Safety Commission began receiving reports from homeowners and builders about something unusual going on in Florida. People were noticing that air conditioners and other appliances in newly built homes stopped working after a few short months. Replacement appliances failed just as quickly. Upon inspection, they noticed a dark coating on the cooling coils of the failing air conditioners and a similar dark coating on other metal surfaces—even on their jewelry. The issue with the appliances was accompanied by a rotten-egg smell in the home.
Within four years, the Consumer Product Safety Commission had logged nearly 4,000 reports across 43 states, the vast majority of which occurred in Florida. Early signs pointed to defective drywall as the culprit. (The problematic building product, it turned out, had all been sourced from vendors in China. Thus, the problem product and resulting issue in homes came to be known colloquially as “Chinese Drywall.”) The commission launched its biggest and most expensive investigation ever to identify the root cause of the problem and find remediation solutions. A 51-home investigation, led by Joe, Jack McCarthy, and the team of consultants at Environmental Health & Engineering, used a combination of air-sampling techniques and the placement of “corrosion classification coupons” in the houses. We determined that the drywall used in this new construction was emitting hydrogen sulfide into the homes.6 Hydrogen sulfide is highly corrosive to copper and silver—thus the dark coating on copper and silver surfaces (technically copper sulfide and silver sulfide)—and it’s known for a rotten-egg smell and very low odor detection threshold (in the parts per trillion).
Additional work by Lawrence Berkeley National Laboratory, using small-scale chambers to test emission rates of chemicals from the defective drywall, confirmed what was found in the homes—hydrogen sulfide and other reduced sulfur compounds coming off the drywall.7 They also found that these emission rates increased with temperature and humidity.8 Once the problem was identified, the main challenge in remediation was, How do you determine where it was used in each home? (Painted drywall looks the same whether it is problematic Chinese drywall or nonproblematic drywall.) Our subsequent study led to a way to “see through the wall” and identify markers of Chinese drywall using a slick real-time forensic fingerprinting technique (portable X-ray fluorescence and Fourier-transform infrared spectroscopy, in case you were curious). A follow-up health investigation by the US Department of Health and Human Services concluded that the people in houses with problematic drywall could have experienced adverse health effects from the hydrogen sulfide, most notably exacerbation of preexisting respiratory conditions, eye and nasal irritation, and nasal tract lesions.9
The takeaway from this case is that oftentimes the first indication that something is potentially wrong in your office, home, or school is a noticeable change in building performance (for example, failing systems, corrosion, or damaged walls). In the Chinese Drywall case, the forensic investigation was aided by a unique feature—the failing systems and appliances were caused by a chemical that was pungent. If it happened to be caused by a chemical with no odor, the mystery of the failing appliances would have taken longer to uncover, while people would be breathing in whatever was in the air.
The lower right quadrant is the most critical quadrant when we are talking about HPIs related to buildings. This is where a company, building owner, or manager can have the greatest impact on the health and productivity of employees, and therefore on the business. And because these are leading indicators, the business can be sure that it is getting the benefits from the building immediately, rather than waiting for a problem to arise and only addressing it after negative impacts have started accruing.
Let’s start with the most important first step in a Healthy Building life cycle: building design. Many of the 9 Foundations of a Healthy Building can be built into the DNA of a building right at the beginning in the design stage. Want higher ventilation rates? Design for it. Want healthier building materials? Spec them. Want higher-efficiency filters? Buy them. In short, if you want a Healthy Building and the economic benefits that come with it, the best thing is to design for it.
Then, after you design the building for health, make sure you are getting what you paid for by commissioning the building. Designing a building, building it, and then not testing it is akin to buying an airplane and putting it in service without first giving it a test flight. No one would want to get on that first flight, and no one should want to be the first one in a new building that hasn’t been fully tested either. Commissioning is a “test flight” for your building. (Fine, the analogy isn’t perfect because an untested building doesn’t run the risk of immediately killing its occupants. But notice we used the word “immediately” …)
To extend this imperfect analogy, you also probably wouldn’t want to get on an airplane that had a test flight but then was never checked again. That’s why ongoing commissioning of your building is recommended, not just one-time commissioning. Ever wonder why flying is the safest form of transportation? It’s because health and safety have been built into the heart of the industry. Airplanes get an “A Check” every 200–300 flights. It’s reasonable that your building should get similar checkups. By this point in the book, we hope you are motivated by the health performance benefits, but just in case, commissioning also comes with considerable energy savings—a study from Lawrence Berkeley National Laboratory found that commissioning can yield energy savings of 13 to 16 percent.10
Additional HPIs in this quadrant focus on ensuring that the building is meeting preset conditions, like building certification prerequisites, a safety and security plan, following green cleaning procedures, and using integrated pest management techniques. By tracking and measuring these, the business is controlling nearly everything it can with regard to building performance. Health is built into its DNA.
Kaiser Permanente, a US-based health-care company with over 200,000 employees, pays a lot of attention to the health of its patients and of its staff. It also pays a lot of attention to what goes into its buildings, of which it has many—over three dozen hospitals and over 600 medical offices. In 2006 it started examining the evidence supporting the use of antimicrobials in its building materials (that is, it was interested in healthier material selection). Conceptually, the use of these chemicals might appear to make sense; at first glance it seems logical that a hospital would want its walls and flooring to have antimicrobial properties.
But it turns out that what might on the face of it seem logical—the desire to have antimicrobials embedded in finishes, fabrics, and just about every high-touch surface in a health-care facility—was simply not supported by the science. A number of studies had shown that it was just as effective to use soap to wash hands as it was to use an antimicrobial soap, that there was no evidence that using these chemicals in surfaces and finishes made patients healthier, that their overuse came with the unwanted effect of promoting antibiotic resistance, and that many of these chemicals, most notably triclosan, were actually harmful to human health. (Triclosan, like a few of the chemicals we discussed in Chapter 7, is a halogenated chemical with two phenyl rings that interferes with thyroid hormone function and reproductive success.)11
What did Kaiser Permanente do? First, it issued a recommendation that these chemicals not be used in its buildings. Then it banned triclosan outright because of its known human toxicity. And finally, most recently, it banned from surfaces in its facilities a whole host of other chemicals still widely used in other buildings and hospitals.12 The result? Healthier buildings, because of a reduced toxic load from unnecessary antimicrobial chemicals. This raises the question: if Kaiser Permanente, one of the largest health-care providers in the country, has deemed it unnecessary and even harmful to use these chemicals in its facilities, why do you have them all over your office building and home?
The bottom right quadrant is where every business should spend its energy and focus to ensure that its building is being leveraged for the health and performance of its employees, but we recognize that the top right quadrant is where everyone thinks they should focus their attention. In an ideal world, wouldn’t we all want clear, leading indicators of employee health and performance? We could get there, sure enough, by requiring employees to take periodic cognitive function tests and to participate in real-time biometric monitoring to track their personal health. But we’ll let you in on a secret: that’s what academics do so you don’t have to. We have already done the studies, wiring everyone up to collect biometric and cognitive function data and then assessing how indoor environmental factors influence human physiological performance. That’s how we know that everything in that bottom right quadrant is important to measure and track.
That said, there are some things in the top right quadrant that are worth exploring. The most important of them is “employee experience,” which can be summed up as, “Listen to your employees—are they happy or upset?” Ask your building manager about the temperature complaints he or she gets every day and you’re likely to get an eye roll and a snide comment about how some complainer-type employees are always unhappy with this or that. But dismissing these complaints is an economic mistake, as we showed in Chapter 6. A better approach is to empower your facilities manager to think as if he or she were in the health-care business and to treat these complaints and reports as vital to the company’s success.
Last, as we move into the world of personalized health, sometimes called mHealth for “mobile health,” we will be able to track, monitor, and support employee health performance in real time through the use of smartphones and wearable technologies. Researchers can now use moment-by-moment data from the sensors in phones to understand behaviors, social interactions, speech patterns, physical activity, and more, in what our Harvard colleague J. P. Onnela has termed “digital phenotyping.”13 But we dare you to tell your employees that you’re going to digitally phenotype them and analyze the tone of their social interactions! Our guess is you would see a sharp decline in positive sentiment correlated with the timing of the announcement. That said, keep an eye on this quadrant as new AI-enabled analytical approaches and smart building sensors and technology start being adopted in the building community. Our guess is that digital phenotyping and sentiment analysis at scale is not far off.
The top engineer at a large multinational told us a story about something that happened on a floor in one of its buildings. That particular floor had about 30 employees, each of whom earned a six-figure salary. Several workers in the space started reporting sick building symptoms, such as headaches, fatigue, and difficulty concentrating. At first it was dismissed by managers who thought these were “complainer-type” employees. Then the problem escalated, as more and more employees on that floor joined in the chorus and many began calling in sick and refusing to work in the building. Executives in the firm noticed this uptick in absenteeism. When the top engineer was summoned to determine whether the building was the potential cause, he inspected the mechanical system and found that the motor for the outdoor air damper for the building had failed, causing the damper to be stuck in the closed position. In other words, no outdoor air was coming into the building. The motor for the damper was quickly fixed after this discovery, and the negative reports immediately stopped. Problem solved.
The downside was that some of the damage was already done: 30 employees with a combined salary of well over $3 million annually ($250,000 for that month) had been distracted and disabled for an entire month, and several had refused to show up for work. The upside is that the firm caught the problem before it went on for multiple months or years—or someone got seriously sick.
It turns out that the real-time monitoring of the employee experience, when combined with what we’ll talk about in the next section—real-time air-quality monitoring—would likely have caught this issue even earlier.
So far we’ve conveniently ignored that big box in the middle of our bottom two quadrants. That’s because we wanted to save the best, and most important, for last. Buildings, like the human body, change every minute. So it’s absolutely critical to have a mechanism in place to constantly check the pulse of the building.
If you want to take the pulse of the building, and (indirectly) of your employees, you need to do environmental monitoring; it’s your first line of defense if you want to be certain that your building is operating as it should. Without a monitoring program in place, you’re flying blind and it’s highly unlikely that you are tapping into the full potential of your employees.
Think about how we typically take the “pulse” of our buildings today. Most buildings get a one-time stamp of approval at the opening. A great example of this is the LEED plaque on the wall at the Landmark Center building at the Harvard T. H. Chan School of Public Health where Joe works. Yes, it may very well signal the performance of that building when it was first opened, but is it realistic to assume that this building, certified 16 years ago, still performs that way today? Of course not. Yet that plaque remains on the wall, purporting to tell all who enter about the credentials of our space. Would you assume that your car, or your laptop, or your furnace at home continues to perform like new for years (or decades)? No way. You should have the same skepticism when it comes to buildings. A commercial or institutional building is a very complex machine, and it needs attention in ways that are not always obvious: you can’t really tell that something is wrong in the way you can with a leaky radiator or flat tire.
Fortunately, a key market shift is under way. Thanks to advances in sensor and Internet of Things technology, we can now keep the pulse of a building as never before. We are quickly shifting from static to dynamic: indoor environments that we can monitor and track continuously, and buildings that can react in real time, too. Make no mistake, there is massive potential here, because for the first time ever we will be able to monitor and influence indirect HPIs: all of those factors that help determine how well the people who account for 90 percent of a building’s costs can perform.
Here are two quick examples to show you why monitoring environmental performance is so important, and to rebut the argument that “buildings are set in stone.”
Take this recent example, where we monitored environmental performance of a newly renovated office space housing a group of (expensive) knowledge workers as part of our global study of workers in office buildings. For background: By all accounts, when you walk into the space, it is clean, welcoming, neatly designed, and managed by a top-notch facilities team at a high-profile organization. Nothing would suggest anything is “off” in this space. In short, it’s a place where you’d want to work, or you’d want your son or daughter to work. Well, we started monitoring this space. Take a look at the data for airborne dust (PM2.5).
The first thing that should jump out at you is the difference in concentrations between work hours and nonwork hours. Between the hours of 8:00 a.m. and 7:00 p.m., the indoor particle concentrations are much lower than in the early morning, evening, and overnight. The second thing that you might have noticed is that the levels indoors are frequently quite high. (For reference, the acceptable level for outdoor air, codified in the U.S. National Ambient Air Quality Standards, is 12 μg/m³.)
Taking the pulse of this space with real-time monitors shows that something is amiss—the level of indoor airborne dust in this newly renovated office is very frequently above 12 μg/m³, and there are significant changes occurring throughout a 24-hour period. Because these particle levels are not visible to the naked eye, the only reason we knew about this issue is that the monitoring tipped us off. So, what is happening here?
This is a building in Chengdu, China, where the outdoor PM2.5 concentration on the day of our sampling was about 40 μg/m³. Why, then, are indoor concentrations in this building so low during the day? We explored this, and lo and behold, we found that the filters used in this building were MERV 14, which has a very high capture efficiency against PM2.5. (You may recall our discussion of MERV efficiency in Chapter 6 that showed a MERV 8 filter has a PM2.5 capture efficiency of approximately 50 percent. A MERV 14 has a capture efficiency around 90 percent.) The cost differential of upgrading the filter? Twenty dollars. The “energy penalty” from the added pressure drop, because the fans must work a little harder to push air through a tighter filter? A few bucks a year. Compare that to the cost of the potentially acute health effects of PM2.5 exposure for the ten employees in this space, breathing the air day in and day out, at levels above the National Ambient Air Quality Standards, in a building owned by a high-profile organization.
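To make the filter comparison concrete, here is a rough back-of-the-envelope sketch using the numbers above. The single-pass model (indoor level ≈ outdoor level times the fraction of particles the filter misses) is our simplifying assumption for illustration; a real building also involves recirculation, infiltration, and indoor particle sources.

```python
# Rough single-pass filter comparison using the chapter's numbers.
# Assumption: indoor PM2.5 ≈ outdoor PM2.5 × (1 − filter capture efficiency);
# this ignores recirculation, infiltration, and indoor particle sources.

OUTDOOR_PM25 = 40.0   # μg/m³ outdoors in Chengdu on the sampling day
NAAQS_LIMIT = 12.0    # μg/m³, US National Ambient Air Quality Standard for PM2.5

filters = {"MERV 8": 0.50, "MERV 14": 0.90}  # approximate PM2.5 capture efficiency

for name, efficiency in filters.items():
    indoor = OUTDOOR_PM25 * (1 - efficiency)
    verdict = "below" if indoor < NAAQS_LIMIT else "above"
    print(f"{name}: ~{indoor:.0f} μg/m³ indoors, {verdict} the {NAAQS_LIMIT:.0f} μg/m³ standard")
```

Under this crude model, the twenty-dollar upgrade from MERV 8 to MERV 14 takes the estimated daytime indoor level from roughly 20 μg/m³ down to roughly 4 μg/m³, consistent with the low daytime readings described above.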
Now what about that other interesting part of this figure: the differences over the course of the day? By this point in the book you likely guessed why the pollution levels are high outside of working hours. The building mechanical ventilation system starts at exactly 8:00 a.m. and shuts off at 7:00 p.m. Measuring the pulse of the building with real-time sensors made the invisible visible, revealing just how much the building system was protecting the health of workers in this company during the day. And when that system is off, or if employees work into the evening, the indoor air starts to look a lot like outdoor air. This is a great example of an avoidable risk, made possible by measuring the pulse of a building and implementing a simple, cost-effective filter intervention.
Now, if you are reading this in the United States or Europe and think this example doesn’t apply to you because “our outdoor air pollution isn’t that bad,” think again. Yes, air pollution levels may be lower in these places, but still, in California there are close to three thousand premature deaths per year attributed to PM2.5 alone. And in Europe nearly half a million premature deaths are attributed to outdoor air pollution each year. Is your building protecting you? The only way to know is to take its pulse.
Buildings change day to day, and hour to hour. Take, for example, this real-world data from a building in Los Angeles, overlaid on the classic psychrometric chart we mentioned in Chapter 4.
The details of psychrometry go far beyond the scope of this book, but for our purposes, what you need to know is that it essentially defines the relationships among temperature, humidity, and moisture, which then allows us to figure out the “sweet spot” of thermal health. In the figure here, we show the psychrometric chart and that sweet spot, as defined by Standard 55 of the American Society of Heating, Refrigerating and Air-Conditioning Engineers; this is the zone where 80 percent of people report being “comfortable.”
Here’s why we introduce this. This is real data from one commercial office building, where each blue dot represents the conditions at a worker’s desk. You can quickly see that in the figure on the left, everyone is in that sweet spot, but in the figure on the right, things have changed and nearly everyone has migrated out of that sweet spot. The temperature has dropped down below the point at which it is comfortable. This may be what is happening in office buildings where employees regularly complain about the cold.
Now consider this—the two graphs map out data points that are one day apart! That’s right, even in this high-performing, Class A office building, with no discernible changes to how the building was operated day to day, there were big differences in temperature and humidity. This figure represents a day of diminished productivity: all of those blue dots out of the sweet spot on day 2 represent top-line revenue and bottom-line profits walking out your door.
Showing this figure also serves another purpose. The only way to “see” this happening in your building is by monitoring for these factors in real time. Active monitoring reveals that people are frequently working in impaired conditions that diminish their potential to be productive. Very often they don’t even perceive it—and if they do say something, their comments are generally discounted.
If you’re not constantly keeping the pulse of the building and proactively responding, then the way you will find out about these issues is when some of the blue dots place a call to your facilities team, or email their manager. And that’s if you’re lucky. What if it takes three or four days for these complaints to roll in? That’s three or four days of low throughput, as we showed in the section on thermal health in Chapter 6.
Let’s go back to Health & Wealth Inc. to explore the economic implications here. Recall that this 40-person hypothetical company had a fully loaded average salary of $75,000 per year. Assuming employees at this company work a typical 250 days per year, the company is spending $12,000 per day on payroll. In Chapter 6 we presented data from a study showing that there was a 1 percent loss of productivity per 2°F temperature change outside typical “comfort” ranges. The figure in this chapter conveniently shows about a 4°F change (about 2°C), on average, which would correspond to a 2 percent decrease in productivity.
Putting that all together, this slight change in temperature could be costing the company an estimated $240 per day in productivity (2 percent of $12,000). You might be thinking $240 isn’t much. Even $240 multiplied by those three or four days may not seem like much. But what if we now told you that this temperature issue lasted for the entire month? Now it’s $240 times 20 working days, which costs the company $4,800 that month. And if the problem continues for a full year? The total grows to $57,600. Worse yet, what if instead of your company being 40 people, you had a company of 400 people, or 4,000 people? This slight change in indoor temperature can become a multimillion-dollar hit to your bottom line.
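The arithmetic above can be sketched in a few lines. The inputs come straight from the chapter; the assumption that the cost scales linearly with headcount (and with 20 working days per month) is ours, for illustration.

```python
# Back-of-the-envelope productivity cost for the hypothetical Health & Wealth Inc.
# Inputs are the chapter's numbers; linear scaling with headcount is a
# simplifying assumption.

HEADCOUNT = 40
SALARY = 75_000        # fully loaded salary, $/year
WORK_DAYS = 250        # working days per year
TEMP_DRIFT_F = 4       # °F outside the comfort range
LOSS_PER_2F = 0.01     # 1 percent productivity loss per 2°F

daily_payroll = HEADCOUNT * SALARY / WORK_DAYS   # $12,000 per day
loss_rate = LOSS_PER_2F * (TEMP_DRIFT_F / 2)     # 2 percent productivity loss

cost_per_day = daily_payroll * loss_rate         # ≈ $240 per day
cost_per_month = cost_per_day * 20               # ≈ $4,800 (20 working days)
cost_per_year = cost_per_month * 12              # ≈ $57,600

for headcount in (40, 400, 4_000):
    scaled = cost_per_year * headcount / HEADCOUNT
    print(f"{headcount:>5} employees: ~${scaled:,.0f} lost per year")
```

Scaling the headcount up by a factor of ten or a hundred is what turns a “slight” temperature drift into the multimillion-dollar hit described above.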
Now, imagine you deployed real-time monitors. You would capture this change immediately, and your team would respond before employees even started complaining. You’d save days of lost productivity.
In each of the six real-world cases we have given you, there was no initial indication that anything was wrong in the building. These were successful companies in beautifully designed office spaces that, to the naked eye, seemed like ideal environments to work in. Without tracking HPIs, in each case, the company would have been blind to important building-related issues affecting its employees.
What’s next? The goal of Part II is to help you define and operationalize a Healthy Buildings strategy. In Chapter 6 we introduced the 9 Foundations of a Healthy Building. We also gave you some practical guidance for things you can do in your building right now that will put the building to work for you and affect your bottom line. All of this is supported by hard scientific data and is evidence based.
In Chapter 7 we looked at how the products we put into our buildings can influence our health. And in Chapter 8 we discussed the current Healthy Building certification systems available on the market. In this chapter, we looked at how (and how not) to measure and track the health performance of your building. In other words, how do we go about verifying that our spaces are continually optimized for health and wealth?
In the closing chapters, we will consider Healthy Buildings in the context of energy, air pollution, climate change, and public health (Chapter 10), and then we will look at the future of the Healthy Buildings movement (Chapter 11). We urge you to stay with us here for this reason: we will explore critical topics like how new technologies will impact market performance, how buildings impact society and the environment, and how this all impacts you and your business.