CHAPTER 5
Unifying People, Process, and Technology

“It doesn't make sense to hire smart people and tell them what to do; we hire smart people so they can tell us what to do.”

— Steve Jobs

To lead in analytic maturity and create a competitive advantage, an organization needs an analytics strategy. It cannot simply throw technology at the problem and expect to "win." The strategy should be built around four pillars: data, people, process, and technology, each of which we covered in the previous chapters. Each pillar plays a crucial role in executing your strategy. The order in which you address them depends on your organization and the support you have from senior executives. Ultimate success comes from executing on all four pillars, and it is a journey that may take years to complete.

In the 2000s, healthcare work was usually completed in silos. Figure 5.1 shows the traditional silos, or domains, in which organizations typically worked, and healthcare institutions are no exception. The operational silo is where business intelligence usually existed; it was responsible for data mining, querying, reporting, and delivering decision-making dashboards around various business needs, from utilization and scheduling to throughput and productivity. The clinical silo housed the work of caring for the patient, from personalized medicine to chronic disease management to clinical risk stratification. The financial silo is where financial planning, budgeting, service line management, and business planning existed. Each of these silos may have had its own tools, technologies, and data warehouses to accomplish the tasks of turning raw data into insights and getting actionable information to decision-makers.


Figure 5.1 Traditional Work Silos

Source: Author.

In this constantly changing healthcare landscape, the work that used to be housed in those traditional silos does not vanish and continues to get more complex. There is still work that belongs in those three domains driving operations and continuity in healthcare. Figure 5.2 illustrates how the work in the traditional silos of clinical efforts, operational management, and financial responsibility becomes more integrated and requires the three traditional silos to work together as the complexity of caring for patients transitions into population health management activities.


Figure 5.2 The Work Is Converging

Source: Author.

This converging work requires changes to culture. Whereas traditional work silos created a sense of ownership over their own work, a new sense of ownership is evolving, one where the mindset shifts from individual ownership to group ownership. Consider care path adherence: operations owns building the necessary workflows in the systems, and clinicians own creating comprehensive treatment plans. Each brings its respective expertise, but they need to own the work jointly to ensure efficiency and success. Operational and financial metrics cannot be measured independently; as new care delivery and payment models develop, succeeding in them requires a confluence of operational and financial metrics. Clinical care coordination brings together clinical and financial owners to manage limited resources on both sides and make sure we are able to treat all patients.

All three silos, or domains, eventually unite in the body of work called population health management. There is no single owner; it requires coordination among clinical, operational, and financial owners to improve healthcare outcomes. Data, people, process, and technology need to come together in a comprehensive platform where clinical, operational, and financial data are connected seamlessly for enterprise consumption. Your enterprise data platform, along with the right analytical tools, will create opportunities for clinical, operational, and financial efficiencies. Characteristics of a successful population health management program include leveraging data and analytics to produce real-time or near-real-time insights for clinicians and operations for specific patient segments, addressing care gaps and throughput. In addition, disease management solutions are able to manage and track costly chronic conditions, promote well-being, and successfully manage growing financial risk-based contracts and bundled payment models.

What does all this have in common? It is a culture change. It is about leveraging data, people, process, and technology daily to get a deeper understanding of the health of the populations we manage. It is where analytics is at the core of keeping patients healthy and treating those who are sick or have chronic diseases. It is capturing, integrating, and interpreting data from all available sources. It is about predictive algorithms that surface opportunities for intervention and prevention. There is plenty of hype and excitement that artificial intelligence (AI) will solve all of our healthcare issues. We use AI as if it were a verb, an action that can be easily taken. As we continue to solve the problems facing healthcare daily, we seem to say, "Let's AI it," as if we can just throw some algorithms at the problem and it will magically solve our healthcare issues. We would like to think it is the thoughtful use of augmented intelligence, that is, human judgment and knowledge combined with AI, that improves clinical decision support, operational planning, and financial outcomes. It requires new skills, knowledge, and talent pipelines for our existing and future analysts and leaders. The promise of AI is high, but not everything is best solved with AI.

The culture change is about valuing data as an asset and about collaboration between teams, a team-of-teams approach, to prioritize the resources and projects that will use analytics to drive a positive return on investment. Cultural change also means having leadership that is passionate about using analytics daily and about developing analytical talent throughout the organization. It means we will take many approaches, with trial and error, to achieve success.

First, data will be treated as an asset. The data hygiene cycle will create "clean" data through two avenues. The first avenue is the development of enterprise standards, that is, cataloging, metadata management, and the development of a business glossary. This ensures that all of an organization's caregivers understand the technical information surrounding the data, from the data definition to the source to calculations to data reliability to how the data should be applied. The second avenue is a three-pronged approach using information security, data quality, and master data management. Information security will make sure the data is accessed according to HIPAA compliance standards. Data quality will apply rules-based checks so that only data that passes those checks is available for analysis, while data stewards focus on tactical processes to correct the data at the source. Master data management will create the "golden" records for critical domains identified by your organization; examples of these critical domains are patients, providers/physicians, locations, and reference tables. This provides a single point of reference for these critical domains and ensures consistency in analytic endeavors. Scorecard dashboards will sit on top of this three-pronged approach to help the organization understand the data's grade.
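To make the data quality prong concrete, here is a minimal sketch in Python; the rule names, fields, and thresholds are illustrative assumptions, not any organization's actual standards:

  import pandas as pd

  # Hypothetical rules-based data quality checks for a patient master table.
  # Field names and rules are illustrative assumptions only.
  RULES = {
      "patient_id_not_null": lambda df: df["patient_id"].notna(),
      "birth_date_in_range": lambda df: df["birth_date"].between("1900-01-01", pd.Timestamp.today()),
      "zip_code_format": lambda df: df["zip_code"].astype(str).str.match(r"^\d{5}(-\d{4})?$"),
  }

  def score_data_quality(df: pd.DataFrame) -> pd.DataFrame:
      """Apply each rule and return pass rates for a scorecard dashboard."""
      rows = []
      for name, rule in RULES.items():
          passed = rule(df)
          rows.append({"rule": name, "pass_rate": passed.mean(), "failed_rows": int((~passed).sum())})
      return pd.DataFrame(rows)

  def split_clean_and_exceptions(df: pd.DataFrame):
      """Only rows passing every rule move on to analysis; the rest route to data stewards."""
      passes_all = pd.concat([rule(df) for rule in RULES.values()], axis=1).all(axis=1)
      return df[passes_all], df[~passes_all]

The pass rates feed the scorecard dashboards described above, while the exception rows route back to data stewards for correction at the source.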

Second, we will take a team-of-teams approach to solve problems. The team-of-teams approach is a cultural shift that combines adaptability and agility, using small teams to harness the power of your organization. It is about a continuous cycle of changing, assessing, and changing again rapidly. This helps teams realize there is no perfect solution and deliver minimally viable projects that are enhanced over time. The book Team of Teams by General Stanley McChrystal is a great read for those looking for a more thorough understanding. We will leverage sponsors who are engaged in using analytics and who see the value it adds, whether that is clinical predictive algorithms to help produce positive outcomes, operational forecasts and simulations to optimize operational metrics, or financial predictive models to create long-range financial plans. The sponsors will be a positive voice, commit to sharing success with the organization, and help create opportunities for others. This team-of-teams approach makes collaboration a shared effort. The analytics and technology teams, along with the sponsor's team, will work hand-in-hand from data to intelligence to action. The collaboration will keep us focused on delivering the most valuable work and service to the organization. We will bring end users closer to the process. Everyone involved will have accountability.

Third, leadership will play a lead role. The characteristics that will emerge are leaders' support of, and awareness in, building analytical capabilities; their demonstration of fact-based analytical decision-making; their willingness to challenge others in the organization to follow their lead; and their willingness to push the limits. Building analytical capabilities does not require you to "know the math"; it requires awareness of the techniques and tools being used to solve particular business problems and the limitations they present. Leaders will demonstrate fact-based analytical decision-making using the outputs of the analytical projects they commission or those created by the analytics team. As W. Edwards Deming said, cited in Chapter 1 of this book, "Without data, you're just another person with an opinion." Leaders using these analytics will challenge those who question whether the new analytics are worth the change management involved and those unsure whether a project would benefit from these new analytical capabilities. Leaders should push the limits of the analytical team. They should challenge the team to be innovative and create new analytical ways to solve problems, knowing that experimentation will include some failures. They realize that failing quickly can often lead to new innovation and breakthroughs.

Fourth, the organization needs to be committed to developing analytical talent. This does not just mean going out and hiring PhDs or talent with advanced degrees. There are three ways to accomplish this. First, yes, you will hire some really smart people, either by creating new positions or by backfilling positions vacated through attrition. Second, you need to invest in your current workforce through internal and/or external tools, education, and training. Third, you will work hard to make analytics intuitive through better visual tools, representations, and data exploration approaches, and will commit to educating and training caregivers in these approaches by:

  • Building analytics libraries for them to use.
  • Creating learning opportunities through internal analytical training classes, taught by your analytic subject matter experts, that cover the variety of analytics techniques used consistently in your organization.
  • Giving them access to third-party learning.

Fifth, you need to develop new talent pipelines through enhanced internships and updated job descriptions. The healthcare industry is lagging in this area. Creating internships where students come in and participate in building analytic solutions gives them a bond to healthcare and can create a positive feedback loop when the students return to school. Creating meaningful work is the best word of mouth on college campuses. Another tactic is to update job descriptions. As colleges and universities continue to create analytics and data science majors, organizations that fail to keep job descriptions current will lose out in talent acquisition.

The following four case studies will highlight the cultural change discussed above. The first three address people, process, and technology individually, and the final case study brings them together. This is not to say that each case study could not address unifying people, process, and technology; the goal is to focus on each part individually to understand its importance. Throughout each case study we describe the problem being addressed and give a few facts about the models used and their size, but the focus is not the model itself. The first case study discusses when people are important. When we speak about people, we are talking about sponsorship and the role sponsors play in leading to a successful outcome. Our second case study focuses on process; in it we talk about how the analytics professional can use the analytics playbook and how an advanced analytics checklist can help your organization scale its use of analytics. The third case study discusses the technologies that might come into play to create "real-time" analytics. The final case study gives an example of people, process, and technology working in harmony in the quest for intelligence.

PEOPLE USE CASE – DELIVERING PRIMARY CARE PREDICTIVE RISK MODEL

Reducing the cost of delivering healthcare is probably the number-one challenge being addressed throughout the healthcare ecosystem. Efforts across many types of population health initiatives, from primary care services to chronic care delivery to reducing prescription drug costs, focus on reducing utilization and costs, without rationing care, while continuing to deliver the best, most cutting-edge healthcare in the world.

The daunting task facing every provider organization is not caring for its patients; it is the sheer magnitude of patients requiring healthcare services. Trying to balance patients who need chronic care services, for conditions like cancer or asthma, patients who need services because they are currently fighting a cold or the flu, and patients who are simply seeking well-care visits or who are not in immediate need of healthcare services because they are young and consider themselves healthy, creates challenges in optimizing physician time. Understanding the risks of its patient population is one way the Cleveland Clinic is addressing this challenge. Through true population health management efforts, Cleveland Clinic created a primary care predictive cost risk model to help improve patient care. The expected outcome of the model was to assign a risk score to each patient to help physicians, nurses, and care navigators with four important functions: (1) prioritize patients who have care gaps, those services that are critical to maintaining proper health, like A1c, blood pressure, mammograms, etc.; (2) identify high-risk patient lists so patients can be scheduled for visits to try to keep them from developing costly healthcare complications; (3) schedule patients who are healthy for well-care visits to help keep them healthy, or catch potential healthcare issues before they develop; and (4) provide a score so that when a patient comes in for a physician visit, the physician understands the relative risk that the patient will incur substantial healthcare costs in the future.

There are industry algorithms that calculate a risk score for patients associated with some type of population health total cost of care model. In those situations, it means partnering with health insurance companies to understand the claims data from all the different providers a patient utilizes. The problem Cleveland Clinic was trying to solve was calculating a risk score for all patients, regardless of insurance status. This meant creating a credible model that would score more than 1.8 million patients today, a number that is expected to continue to grow. Over three years of data were used to train the model. More than 300 different variables, from claims data to electronic medical records (EMRs) to Hierarchical Condition Categories, were used. We are currently working on a new version of this model; as we discussed earlier in the book, the amount and types of data continue to grow, and that naturally leads to exploring model improvements. But we digress; back to understanding how people play an important role.

For this model to be considered credible, our population health team addressed the importance of every patient having a risk score. Once the decision was made that every patient would receive a risk score, there were a number of decisions about where to place the risk score in the clinical workflow and how it would be used. For all of these decisions, we needed a clinical sponsor, as well as physicians, nurses, and care navigators working under that sponsor. This clinical sponsorship team would provide two important things: (1) clinician validation that the model is performing to high, credible expectations; and (2) guidance on where to embed the risk score in the clinical team's daily workflow so that actions could be taken to improve the quality of care and reduce the overall total cost of care.

Clinical validation is an important step in the model building process as well as in model acceptance. We can build a model using a variety of techniques, from random forests to stepwise regression to neural networks to ensemble modeling. None of these techniques matters unless we secure clinical validation behind them. Simply letting analysts loose to build models using claims data would have been fruitless and would have lacked support for any risk score the model delivered. In our discussions, the physicians agreed that claims data, both internal and external, would add value to the model. What really added to the credibility of the discussions was the physicians' constant insight into which EMR data they believed should be tested. For clinical variables, we discussed the importance of transformations such as minimum value, maximum value, and most recent value. We discussed not only which variables were evaluated but also how to handle missing values and outliers to produce the best model. Being able to show the importance or nonimportance of EMR data as the model evolved allowed for important conversations between physicians and analysts and added to the physicians' comfort level. Throughout the model building process, we brought progress updates back to the physicians. We recognized that engaging the physicians in the model building process would not only accelerate model acceptance, it would also allow for discussions between the physicians and the analysts building the model that would eliminate confusion and inconsistencies in interpretation.
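As a minimal sketch of the variable transformations discussed with the physicians, minimum, maximum, and most recent value per patient, plus simple handling of missing values and outliers, the column names here are assumptions, not the actual EMR schema:

  import pandas as pd

  # labs: one row per patient per lab result, with illustrative columns
  # ["patient_id", "result_date", "a1c"].
  def summarize_clinical_variable(labs: pd.DataFrame, value_col: str = "a1c") -> pd.DataFrame:
      """Collapse longitudinal lab values into per-patient model features."""
      labs = labs.sort_values("result_date")
      features = labs.groupby("patient_id")[value_col].agg(
          min_value="min",
          max_value="max",
          most_recent="last",      # last value after sorting by date
          n_observations="count",
      )
      # Flag missingness explicitly, then fill with a simple, reviewable default.
      features["most_recent_missing"] = features["most_recent"].isna()
      features["most_recent"] = features["most_recent"].fillna(features["most_recent"].median())
      # Cap extreme outliers at the 1st and 99th percentiles before modeling.
      lo, hi = features["max_value"].quantile([0.01, 0.99])
      features["max_value"] = features["max_value"].clip(lower=lo, upper=hi)
      return features.reset_index()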

When the model was ready, we could not just run it, create a score for each patient, and load it into the workflow. We needed to take a sample of patients and have physicians validate the model. Throughout the validation process, we continued to discuss the most important variables being used. We asked a number of physicians to review, by hand, each sampled patient, the patient's score, and the patient's EMR. However, a raw score alone would not convey to the physicians any magnitude of risk, so each patient was classified as (1) low risk, (2) medium risk, or (3) high risk. For each patient's classification, we asked the physicians to record one of three outcomes: (1) agree with the model, (2) disagree with the model, or (3) undetermined. Once we received the results, we needed to understand not only the patients the physicians marked as outcomes 2 and 3, but also those marked as outcome 1. It was important to understand, from the physician's perspective, why each patient was assigned that specific outcome. Meetings were held to make sure all the necessary feedback was clearly understood. By working with the physicians, the organization began to have conviction in the model and could then understand tweaks to the model and how they would affect the classifications. In this new age of analytics, it is understood that all models have some misclassification rate. Failing to understand that misclassification rate will erode model acceptance across the organization and could potentially misguide patient care.
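A simple way to summarize the physician review, shown here as an illustration with assumed column names and hypothetical rows, is a cross-tabulation of the model's risk tier against the physicians' outcome, with an agreement rate per tier:

  import pandas as pd

  def summarize_validation(review: pd.DataFrame) -> pd.DataFrame:
      """Cross-tabulate model risk tier against physician outcome and add agreement rates."""
      counts = pd.crosstab(review["risk_tier"], review["physician_outcome"])
      counts["agreement_rate"] = counts.get("agree", 0) / counts.sum(axis=1)
      return counts

  # Hypothetical review results, one row per sampled patient.
  review = pd.DataFrame({
      "risk_tier": ["low", "low", "medium", "high", "high", "high"],
      "physician_outcome": ["agree", "undetermined", "agree", "agree", "disagree", "agree"],
  })
  print(summarize_validation(review))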

The next set of people important to any analytical model deployment are the ones who will use the model's output. Depending on the type of model, you may need to deploy the results in your EMR, in an operational decision support system, in a financial decision support system, or in a new "home" built for the output. The analytics team and the end users must also decide on the frequency of the model's output: is this a model that runs daily, weekly, monthly? In this case, the model's output would be updated monthly. Even on a monthly basis, expectations need to be set for the day of the month the model will update. If the model is scheduled to execute over a weekend or on a holiday, will the run date be moved to the next business day? What happens if there is a failure in the process? All of these questions require people to make decisions so that the appropriate workflow, scheduling, and execution can occur. In some cases, there may be a need to store historical results of the model's output; the model's playbook comes into play here, as it is people who document the details that ensure successful production automation. In this use case, we update the model monthly, and it was decided that the monthly history of the output would not be stored at this time. Other important considerations to plan for during this process are whether a pilot is needed and whether the solution will scale. Is there a testing period before release to production? Where does that testing reside? The people building and maintaining these models, deploying them into production systems, and using them must stay in sync to ensure end user and compliance expectations are met.
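As a small illustration of the scheduling decision described above, the holiday list and dates here are assumptions, shifting a run date that lands on a weekend or holiday to the next business day might be handled like this:

  from datetime import date, timedelta

  # Illustrative holiday calendar; a real deployment would pull this from the
  # enterprise scheduling system rather than a hard-coded set.
  HOLIDAYS = {date(2024, 1, 1), date(2024, 7, 4), date(2024, 12, 25)}

  def next_business_day(run_date: date) -> date:
      """Move a scheduled model run off weekends and holidays, per the agreed workflow."""
      while run_date.weekday() >= 5 or run_date in HOLIDAYS:  # 5 = Saturday, 6 = Sunday
          run_date += timedelta(days=1)
      return run_date

  print(next_business_day(date(2024, 12, 1)))  # falls on a Sunday, shifts to 2024-12-02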

Now that it is understood how the model's output will be used and where it will reside in the workflow, a team of people is required to gather and document the requirements and coordinate the production automation workflow. This includes vetting all aspects of the workflow with the development team so that the hosting systems, the execution of the model, and the operational system integration plan complete as planned and the model output reaches its destination. Creating a communication plan is critical to defining how error-handling and failed-job processes are managed. In the worst case, the model's output is not updated as scheduled, the previous output remains in place, and the appropriate communications go out to the end users. The notifications should communicate effectively and succinctly what failed and the mitigation timelines. Keeping end users updated gives the analytics and development teams the freedom to troubleshoot and fix issues without fielding phone calls from end users who were expecting the model updates. This stage of the workflow is critical and should not be rushed. Once a model is ready, there is always eagerness to move it into production. In the lifecycle of analytics projects, do not let the project's return on investment hasten the due diligence and attention to workflow details. It is important to start realizing value for patients as quickly as possible; however, a lack of focus from the people on the technical details of implementing the model can cause further delays.

During this implementation phase, there are three categories of risk to manage:

  1. Known knowns
  2. Known unknowns
  3. Unknown unknowns

How do you mitigate known knowns? Depending on the complexity, one team may not be able to complete its task until another team has finished theirs; you can plan and communicate around this risk. There are also known unknowns to consider. There may be other priorities this project is competing with; you do not know yet whether this is a risk, but it could become one, so you can plan contingencies. Then there are unknown unknowns. These are the risks that just pop up, things even the best planning may not catch. They can be minor or major and will be dealt with as they arise. It is important to establish communication channels among the analytics team, the development and implementation team, and the end user to handle questions and issues that may arise, and to build accountability into the playbook with details about the people involved at each implementation stage and the management of timelines. This is all reliant on people. Do not let failures at this stage jeopardize this model or future models.

Finally, the model is in production and being used. A team will monitor and maintain the model and the process. Because a thorough playbook was built by the analytics team, with the help of all other stakeholders, from the clinical team to the technical team, there is a common understanding of what model maintenance will look like. Model maintenance includes plans for removing the model from production if necessary (and for how the work that relied on its output would then be completed), for transitioning to a new version of the model, and for retiring the model. As the use of analytics continues to grow, some models will no longer be required, could be incorporated into other models, or should be retired. The analytics team needs to discuss with its end users what decision criteria will be used for model retirement or replacement. There will also be instances where the analytics team initiates improvements to the model before degradation starts. One thing is known: models degrade over time, and the people building and using the model need a documented plan for deciding whether the model requires retraining, should be transitioned to a newer model, or should be retired. This plan should have built-in timelines to ensure end user workflow is not disrupted or that alternatives can be implemented. With the correct monitoring measures in place, degradation alerts can be communicated with ample time for the analytics team and the end users to plan the next steps. There should never be a surprise when you come into work and a model has been removed from production. During this stage, it is important for the analytics team to have a clear plan in place so that any analytical model's lifecycle is fully understood.

This case study is a perfect example of how a team of teams works together to improve patient care. The importance of people at every stage of the analytics process cannot be overstated, misjudged, or undervalued. Each member of this process plays an important role, and every subject matter expert needs to have a voice. There will be successes. There will be issues and problems. People are the resource that brings ingenuity and innovative solutions. Model sponsorship, with the right team of "model validators" underneath, ensures organizational acceptance. The success of the project will be judged on the following objectives from the "model validators":

  1. Does it work as designed? Did we assign a risk score to each patient identified? Yes.
  2. Do end users use it to help prioritize patients who have care gaps, identify high-risk patient lists, help schedule patients who are healthy for well-care visits, and provide a score to help clinicians understand relative risks? Yes.
  3. Does it meet the overall goals of the project to reduce healthcare costs and keep people healthy? To be determined.

We are accomplishing the first two of these. Reducing overall healthcare costs is a longer-term measure, not just something that will be judged a success over days, weeks, or months. It may take years to understand. We are confident that this proactive approach to population health will meet this objective and reduce healthcare costs.

PROCESS USE CASE – RATE REALIZATION MODEL

This case study focuses on process and the improvements advanced analysts can make to a seemingly simple insight: improving service line decision-making. A frustration that can encumber decision-making is having to make adjustments to data that lack accuracy and could therefore distort the decision. Managing service lines in an organization requires two important numbers, accurate revenue and accurate cost, to derive profitability. Most organizations can accurately determine the cost of their service lines. Revenue, however, can be a bit complicated. We need to understand the "realized revenue" in order to derive true profitability and make business planning decisions. What is "realized revenue"? Many organizations track two revenue numbers: (1) some type of modeled revenue from contracts; and (2) actual payments received from government plans, commercial payers, patient payments, etc. The "realized revenue" is the revenue that is ultimately reported once a patient's account is closed or paid in full. It includes insurance payments, patient payments, and write-offs for contractual obligations, bad debt, or charity. In many instances, it takes months, occasionally even years, for accounts to be considered paid in full. You might wonder how you can make business decisions if paid-in-full account data is months old, or older.

The answer lies in developing a model that can predict this “realized revenue” for open accounts and deliver it on a timely basis into your decision-support data. Cleveland Clinic created a champion model tournament that uses eight different models with at least nine different techniques and 40 variables to produce “realized revenue” for every account that is open. Sounds simple enough, right? As the model was tested, validated, and being prepared for production, it was agreed that the model would be run weekly. That means, as new accounts are processed and moved into the decision support system, those new accounts that required a “realized revenue” would need to use the modeled revenue number as a proxy until the next model run. This model revenue number is normally overstated but was not considered material since it was only affecting one week's worth of data.

Now that you have a little background on the model and how often it runs, let's discuss the process required to get this model into production. As in any process, there was a problem that needed to be addressed: we needed to improve service line decision-making by predicting "realized revenue" for open accounts. At the time, users of the data had three options: (1) use modeled revenue; (2) use payments as they were received and posted; or (3) adjust modeled revenue or payments using their own judgment or history. None of these options was optimal. The first would most likely overstate revenue. The second would most likely understate revenue. The third would give you a number somewhere between the first two and was reliant on end user judgment; if that end user left the organization or the service line area, a new end user would have to step in and apply their own judgment. That could lead to all kinds of questions about how the adjustment was made instead of a focus on making a decision. The goal was to create a model that would produce realized revenue.

Next came the model development lifecycle. It started with data discovery and mining efforts to understand the different sources of data available, the number of variables that could be considered in the model, and the amount of data available for training and tuning the model. Once this step was completed, the next steps were to pull, prep, and transform the data for the model. The process flow that had to be built dealt with the complexities of handling missing data, imputing some data elements, handling outliers, and log-transforming certain data elements. Next came model training and tuning. We utilized a four-step approach. First, we created a model hierarchy that needed to be followed; there were eight different models in the hierarchy. Second, we used a machine learning approach to evaluate different techniques, more than nine different techniques for each of the eight models, drawing on 40 variables from three different data sources. Third, we used a champion tournament to select the weekly winner. Fourth, we monitored model performance for degradation and enhancement opportunities. The final step in the model lifecycle was to validate the model to make sure it was reproducing results within acceptable limits (e.g. predicted "realized revenue" was between modeled revenue and actual payments).

Now that the first version of the model was created, it was time to move it into production. We divided this into three steps. The first step is documentation. Even though we list this step here, it really started at the beginning; if you remember, in Chapter 3 we discussed the need for a model playbook. This step is essential and is required before a model goes into production. Next is the production step. This step requires a workflow integration flowchart or diagram. The flowchart displays the steps that need to be completed, in the correct order, to give the technical folks the roadmap necessary to move the model from your development area, to your QA area, and finally to your production area. This includes data quality management rules and signoffs before the data is migrated to your production area for end user consumption. The final step in the process is model management. This is also defined in your documentation step, but it is the visual way of understanding model performance. In this case, we store weekly information for tracking average squared error (ASE) at various levels and use different control charts to signal potential issues, identify degrading prediction errors, determine periodic retraining efforts, and help investigate data anomalies. An ASE of zero means your model has perfect accuracy, which we know is virtually impossible. In this case, we aim to minimize the ASE of the analytical model and strive to make sure model performance adheres to certain thresholds.
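As a hedged sketch of the model management step, the threshold logic here is an assumption, one of several control-chart rules one might use, weekly ASE tracking and a simple degradation signal could look like this:

  import numpy as np
  import pandas as pd

  def average_squared_error(actual, predicted) -> float:
      """ASE: the mean of squared prediction errors; zero would mean perfect accuracy."""
      actual, predicted = np.asarray(actual, dtype=float), np.asarray(predicted, dtype=float)
      return float(np.mean((actual - predicted) ** 2))

  def flag_degradation(weekly_ase: pd.Series, window: int = 12, n_sigma: float = 3.0) -> pd.Series:
      """Flag weeks whose ASE exceeds a rolling mean plus n_sigma rolling standard deviations,
      a simple stand-in for the control charts described above."""
      baseline = weekly_ase.rolling(window, min_periods=4)
      upper_limit = baseline.mean().shift(1) + n_sigma * baseline.std().shift(1)
      return weekly_ase > upper_limit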

This process would not have been successful without the efforts and support of a multidisciplinary team. Having a good working team with strong project management certainly played an important role in the success of this project. During the model building phase, it was the collaboration between enterprise analytics and subject matter experts from the revenue cycle management and revenue and reimbursement departments that helped inform, test, and validate the models. During the implementation and post-implementation phases, it was the teamwork among the enterprise analytics, business solutions (developers), and processing (data reconciliation) departments that got the data ready and available in the decision support system for the end users. The final piece of success comes from end user adoption. During this process, that meant transparency with the end users and the areas affected by the change. It is about change management before the results go live: letting people know change is coming and creating educational opportunities to ease the stress before production goes live. Educational communication can come in many forms, from webinars to in-person town halls to one-on-one department sessions. A successful tactic you can employ once you go live is to hold "office hours." Remember, back in school, there were times when you were too shy to ask a question in front of the class, or your questions only dawned on you once class was over? The perfect opportunity to get your questions answered was to visit your professor during office hours. Staffing some open "office hours" is a great way to eliminate stress once the model goes live.

TECHNOLOGY USE CASE – POPULATION HEALTH MANAGEMENT APPLICATION

When we discuss "real-time" analytics, the term can be used in various contexts. "Real-time" could mean having a model at the edge, used with a patient to help inform a visit, or producing financial outcomes in response to questions asked during a meeting. "Real-time" could also mean having a model that can produce results within a few hours so that actions can be taken. "Real-time" could also mean having a model that runs daily and produces results used within a decision support system. This case study explores "real-time" in the sense of a model that can produce results within a few hours so that actions can be taken, by leveraging the right technology.

Patients in population health management programs are becoming a larger part of the fiscal responsibility of the organization. Current long-range planning methods had no way to understand the impacts of these patients. We set out to build an application that would allow the organization to collect assumptions, drive models with them, and produce impacts that could be incorporated into long-range planning.

Technology was an enabler for this project. We were armed with a technology solution that would allow the team to be innovative and stretch the limits of analytics. The first step in this case study was a two-day brainstorming session around the functionality we wanted the application to have. The team laid out six functional elements that we wanted to be able to model:

  1. Rate and charge changes
  2. Movement of services
  3. Shared savings simulations
  4. Volume changes by location/payer
  5. Volume changes by location/institute
  6. Narrow network product simulations

The output of the application needed to be a 72-month rolling projection that could be reported by various dimensions, like location, type of service, service line, payer, etc. It also needed to help the healthcare organization understand the risk of the assumptions for any given element or combination of elements. The team also had an extensive list of data requirements, from our own data warehouse to third-party data to public data, that would be used to help seed distributions and provide utilization guidance. The minimally viable product used during this initial case study excluded the third-party and public data; those enhancements remained in the backlog, and a new demand model now in development is using them.

The team decided to produce a two-stage model. The first stage would produce the necessary 72-month rolling forecast. Once the forecasts were completed, the second stage would produce simulations for each month of the 72-month period. This would require substantial computational power and storage.

We will give a little background about each of the functional elements. Rate and charge changes would require the pricing assumptions during the 72-month period by payer by location by date. This would drive the revenue calculation requirements. Movement of services would require assumptions driving services from one or multiple locations to one or multiple target locations. Assumptions would be date driven. As services would move, an option to “backfill” vacated services with similar or different services was required. Shared savings contracts needed various inputs from population selection (commercial business/Medicare business) to the number of members in the contract to medical loss ratio targets and other inputs to calculate contract performance for only those members. Volume changes by location/payer would require assumptions around the shift in volume from one or more payers to one or more target payers by location by date. Volume changes by location/institute would require assumptions around the shift in volume from one or more institutes to one or more target institutes by location by date. Narrow network product simulations would require inputs on membership, utilization, and cannibalization rates.

As you can start to see, the complexity of the model just in handling the inputs becomes challenging. Add to that the complexity the simulations needed to account for. We would start with a bank of encounters. In this case, an encounter is an inpatient stay, starting at the admission date and running through the discharge date. The bank is made up of historical encounters, in some cases adjusted to present-day revenue and cost metrics, and it describes the probabilistic mix of encounter types we expect to see in any given month. This bank could be adjusted throughout the 72-month forecast, if necessary, to account for changes in the distribution over time, based on industry thought leaders or internal input.
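A minimal Monte Carlo sketch of the second-stage simulation, under stated assumptions (the encounter bank columns and the draw size are illustrative, not the production model), might look like this:

  import numpy as np
  import pandas as pd

  rng = np.random.default_rng(42)

  def simulate_month(bank: pd.DataFrame, expected_encounters: int, n_sims: int = 1000) -> pd.DataFrame:
      """Draw encounters (with replacement) from the historical bank to produce a
      distribution of monthly revenue and cost; repeat for each of the 72 months."""
      results = []
      for sim in range(n_sims):
          draw = bank.sample(n=expected_encounters, replace=True,
                             random_state=int(rng.integers(1 << 31)))
          results.append({"sim": sim,
                          "revenue": draw["adjusted_revenue"].sum(),
                          "cost": draw["adjusted_cost"].sum()})
      return pd.DataFrame(results)

  # Percentiles of the simulated distribution express the risk in the assumptions, e.g.:
  # simulate_month(bank, expected_encounters=4200)[["revenue", "cost"]].quantile([0.1, 0.5, 0.9])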

Put this all together and you have a model that produces billions of rows of data. Being able to leverage in-memory and in-database technologies makes this type of project achievable. The key to this project was understanding that no decision was going to be made by changing parameters in a meeting, examining the results, and deciding on the spot; the complexity of the interactions does not warrant that type of decision-making. This was a situation where we wanted to collect and run a set of assumptions and produce output that teams, like financial planning, could start consuming within hours. We would not have been able to produce results within hours in a traditional data warehouse setup. Delivering results at this computational scale requires innovative approaches like in-memory and in-database technologies.

Don't let technology be the solution. First and foremost, let your team try pioneering approaches that stretch the limits of analytic capabilities for one-of-a-kind, competitive-advantage models based on the requirements laid out. This will lead to the appropriate technology solution.

INTEGRATING PEOPLE, PROCESS, AND TECHNOLOGY – ENHANCING CLEVELAND CLINIC WEEKLY OPERATIONAL MEETINGS/CORPORATE STATISTICS DASHBOARDS

We want to use this case study as an example of unifying people, process, and technology. While the work is still in progress, it shows the importance of unifying data, people, process, and technology. Over time, organizations evolve the way they measure their business, and Cleveland Clinic is no different. Cleveland Clinic currently uses the concepts discussed in John Doerr's Measure What Matters, which describes a system of defining objectives and using key results to understand performance. An important concept in this type of goal-setting system is the transparency of those objectives and key results throughout the organization. It is important to understand how we were performing yesterday, a week ago, a month ago, etc. It is just as critically important to understand how we think we will be performing tomorrow, a week from now, a month from now, etc.

The organization was confronted with two difficult challenges it was looking to solve. The first was to help alleviate the burden of preparing for operational meetings. Preparation for these meetings was labor intensive across the departments involved: collecting, preparing, and analyzing data from different sources, from actual volumes to budget information to various analytics across metrics, and compiling it all into a format that could effectively support weekly operational meetings. The goal of providing leadership with actionable intelligence was to improve performance, which would also drive four secondary outcomes. First, we needed to build a process to support daily and weekly operating meetings; we would have to understand and build a process flow spanning data collection, model output, and data integration into dashboards. Second, we would simplify and streamline information delivery. We needed to deliver data in a format that could easily be digested into new dashboards to run meetings and also be available for download by end users; downloadable data would support the additional analytics and visuals from ad hoc requests not handled by the agreed-upon meeting visuals. Third, this process would continue to ensure alignment and consistency in the reporting of corporate statistics and financial performance management. Lastly, we would leverage enhanced statistical methodologies to support our business cycle.

The second challenge was producing forecast estimates for the corporate statistics dashboards. The existing logic for producing these estimates was rudimentary. It was the adoption of objectives and key results that brought to the forefront the need to revisit the forward-looking information on this dashboard. This is where the harmonious interaction of many departments, the building of a process, and the use of technology produces a pleasant piece of music. The information to be produced addressed relatively short time horizons, in this case a forecast that could help inform our business over the next two months.

We will discuss how we are successfully tackling these two challenges and how each objective is satisfied by unifying data, people, process, and technology.

Let's talk about data first. Without data, there would be no information. Believing that we can forecast the future perfectly is simply impractical. Think about the behaviors that influence your models. Take weather forecasting, for example. When you listen to a weather forecast, there is likely a "chance of" something, a percentage for precipitation, or a range in temperature. What weather forecast is 100% accurate? Answer: since I am not a meteorologist, I will safely guess virtually none. Think of the complexity of how weather forecasting is done. It requires taking observations of weather-related inputs, like temperature, barometric pressure, humidity, wind speed, wind direction, solar radiation, precipitation, etc., from land-based and sea-based reporting stations around the world, normally hourly from automated stations, and processing that information through computer models. You get the point: an extremely complex set of inputs is modeled using many different forecast models and a complex set of mathematical relationships to produce results.

While your forecast models are probably not relying on hourly global observations, you are still challenged by the complexity of influencing behaviors and inputs that need to be defined. Part of understanding what data needs to be collected is knowing what problem you are trying to solve. As explained in our case study, we are looking to produce forecasts that alleviate operational challenges as well as daily forecasts for our corporate statistics dashboards for short-term planning. Armed with this knowledge, we were able to start exploring the data to prepare for our model. In this case, it wasn't necessary to collect data hourly. Nor was it necessary to collect data weekly or monthly. By collaborating with the decision-makers and end users, we learned that daily input data would be necessary to create the forecast models required to deliver outcomes successfully. We were able to leverage an extensive daily history of data to build forecast models for performance measures like surgeries, admissions, and evaluation and management (E&M) visits. The key data features of these models include operational case volumes, same-day add-ons and cancellations, volumes by type of service setting (like inpatient and outpatient), location type (like hospitals and ambulatory surgery), and clinical institute (like heart and vascular, neurology, digestive disease, cancer, etc.), along with scheduled and filled slots, holiday schedules, national conference schedules, and unusual occurrences or events identified internally.

Let's explore people and process from a couple of different perspectives. As this project evolved, we knew there would be two ways to think about the analytics. Back in Chapter 2, we discussed the importance of producing and consuming analytics. In Chapter 3, we examined the importance of a process framework around understanding the work, doing the work right, and doing the right work. In this project, we are producing a forecast, which we call the science of forecasting, and consuming the forecast, which we call the art of management. Let's delve into what we mean by the science of forecasting. Believing that we can forecast the future with 100% accuracy is simply impractical. Think about the behaviors that influence your models. Whether your forecast models are simplistic in nature or use an extremely complex set of inputs to inform many different forecast models with intricate mathematical relationships, there is a science to forecasting.

While your forecast models are probably not relying on hourly global observations from approximately 1,200 weather stations, you are still challenged by the complexity of influencing behaviors and inputs that need to be defined and understood. The science is to produce an accurate forecast. In this case, "accurate" is based on the ability to take action, as will be discussed momentarily in the art of management. The science required is twofold. First, the analysts create one or more models. Examples of the different models created are Auto Regressive Integrated Moving Average (ARIMA) with intervention and causal variables (ARIMAX), unobserved components models (UCM), and exponential smoothing models. These models are included in a daily champion tournament, where the best-performing model's results for that day are used. Second, the analysts tune existing models or create new ones by identifying, improving, and/or eliminating inputs that do not add significant value to the final forecast model. More inputs are not necessarily better.

This continuous improvement process takes into account the model's responsiveness to new inputs. The incremental benefit of improving your forecast models needs to be weighed against the cost of maintaining them; the people resources required to maintain the models need to be clearly understood so they can be deployed on other projects. Some levels of accuracy may not be achievable, and pursuing unrealistic levels consumes resources that could be deployed elsewhere. There may also be costs related to collecting existing or new data. Does the data exist in a format the models can consume? What resources would be required to make the necessary transformations from development to production? If there is new data that could be tested, what resources are required to build data pipelines? Do you need to set up a process to incorporate the new data into the models? Analysts building the forecast models need to take into account monthly seasonal effects and day-of-the-week cycles. They take into account unusual events or anomalies to understand whether the noise created by those events should be modeled. They also continue to evaluate new methods to add to the champion tournament.

The last science aspect we will discuss is the measurement of the models. There are many outputs you can collect on forecasting performance. The analytics team needs to balance metrics that can be understood by end users with the more complex measurement techniques the analytics team finds valuable. The team tracks some performance metrics that are conceptually simpler and easier to interpret. Some of the most popular metrics in this category are:

  1. Mean absolute percentage error (MAPE) expresses accuracy by converting the difference between your forecast and actual results into a percentage. For each point in the defined period, in this case two months, you take the absolute difference between the forecast and the actual result and divide it by the actual result; you then average these values across all points and multiply by 100 to make it a percentage.
  2. Mean absolute error (MAE) examines the average of the absolute difference between your forecast and your actual results over a defined period of time. Since our forecast is for two months, we calculate the most recent MAE for the two-month period.
  3. Coefficient of (multiple) determination, or adjusted R², gives you information about the goodness of fit of your forecast model. It measures how well the actuals are replicated by the model, based on the proportion of the total variation in outcomes explained by the model; the closer the value is to 1, the better the fit. The calculation is a little more involved and is provided in most forecast model performance outputs. A small computational sketch of these three metrics follows this list.
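As a small, hedged illustration of how these metrics are computed (not the exact production code), for a daily forecast evaluated over a two-month window:

  import numpy as np

  def forecast_metrics(actual, forecast, n_predictors: int) -> dict:
      """MAPE, MAE, and adjusted R-squared over the evaluation window (e.g. two months of days)."""
      actual, forecast = np.asarray(actual, dtype=float), np.asarray(forecast, dtype=float)
      n = len(actual)
      mae = np.mean(np.abs(actual - forecast))
      mape = 100.0 * np.mean(np.abs((actual - forecast) / actual))  # assumes no zero actuals
      ss_res = np.sum((actual - forecast) ** 2)
      ss_tot = np.sum((actual - actual.mean()) ** 2)
      r2 = 1.0 - ss_res / ss_tot
      adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - n_predictors - 1)
      return {"MAE": mae, "MAPE": mape, "adjusted_R2": adj_r2}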

These metrics do have their drawbacks, and educating your end users on how to use them is important; this is discussed in the art of management below. The analytics team stores these metrics, as well as other model performance metrics, after every forecast run. This allows us to build visuals over time and create alerts to help monitor the forecast models' performance, so the team can continuously monitor and improve the champion tournament.
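Before moving on to how the forecasts are consumed, here is a minimal sketch of the daily champion tournament described above, assuming a univariate daily series and the statsmodels library; the candidate specifications and holdout length are assumptions, and the production tournament also includes ARIMAX models with intervention and causal variables:

  import numpy as np
  import pandas as pd
  from statsmodels.tsa.arima.model import ARIMA
  from statsmodels.tsa.statespace.structural import UnobservedComponents
  from statsmodels.tsa.holtwinters import ExponentialSmoothing

  def run_champion_tournament(y: pd.Series, horizon: int = 60, holdout: int = 28):
      """Fit each candidate on history minus a holdout, score it on the holdout by MAE,
      and return the winner refit on all history along with its forecast."""
      train, test = y.iloc[:-holdout], y.iloc[-holdout:]
      candidates = {
          "arima": lambda d: ARIMA(d, order=(2, 1, 2)).fit(),
          "ucm": lambda d: UnobservedComponents(d, level="local linear trend",
                                                freq_seasonal=[{"period": 7}]).fit(disp=False),
          "ets": lambda d: ExponentialSmoothing(d, trend="add", seasonal="add",
                                                seasonal_periods=7).fit(),
      }
      scores = {}
      for name, build in candidates.items():
          fitted = build(train)
          pred = np.asarray(fitted.forecast(holdout))
          scores[name] = float(np.mean(np.abs(np.asarray(test) - pred)))  # holdout MAE
      champion = min(scores, key=scores.get)
      final_fit = candidates[champion](y)   # refit the winner on the full history
      return champion, scores, final_fit.forecast(horizon)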

Let's continue the discussion around the art of management. In the art of management, it is not about using the forecast exactly as it is predicted. The forecast model is constantly fed with new data, producing new forecasts with constantly changing predicted values and ranges. It is using these ever-changing forecasts that leads to what we like to call the “art of management.”

Let's go back to our weather forecast example. Think about all of the outputs being projected, temperature, precipitation, humidity, wind speed and direction, and the duration of the forecasts, from 12 hours and 24 hours to 3-day, 7-day, and 10-day or longer, and this is just scratching the surface. How we consume these forecasts is based on how we expect to use them. Consider how we might use a weather forecast over the next 24 hours. The forecasting models can get the high or low temperature correct, but maybe not in the hour they forecast. Think about how you want to consume the model. If your concern is making sure you have the correct clothing for the day, you are certainly interested in the high and low temperature range, as well as when those temperatures will occur. Think about how close you really need the forecast to be. If the models predict a high temperature for the day of 80 degrees Fahrenheit, then as long as the forecast is within your tolerance for changing your choice of outerwear, you most likely do not think twice about it when the actual high comes in at 78, 80, or 83 degrees Fahrenheit. The forecast becomes critical when the model predicts a high of 80 degrees Fahrenheit and the actual high is 50 degrees Fahrenheit; that would surely alter your outerwear choices for the day.

Sure, it would be easy just to use the “science of forecasting” and blame the model if decisions do not pan out. We don't need a complex forecast model with elaborate data and processes to get an improved forecast. We need to understand our business process and the inputs that we can use to build a reliable model. Let's think about why we are producing a forecast. It is so that we can enact change. The forecast is produced to guide us in answering the following types of questions:

  • Can we make a decision?
  • What decision are we trying to influence?
  • At what point would you take action?
  • What is the acceptable range of outcomes at various volume sizes?
  • Do we understand the forecast bias in adjusting our decision-making process?

As you can see from the above questions, we have not included questions like:

  • How accurate is the forecast? What is the MAPE?
  • What is a good level of accuracy? What is the MAE?
  • Should we change the forecast based on recent data anomalies?

Why? The goal is to provide leadership with actionable intelligence from the forecast to enact change that will improve performance. An important piece in the art of management is educating the users on how to use the forecast, from the end user preparing for the operational meetings to the senior leadership that will use all of the information prepared to ultimately make a decision. The point is not to focus on specific forecast metrics to determine whether “we should use the forecast.” Each of these metrics has flaws and needs to be understood in context so that “forecasts are not thrown out.”

Let's take a look at the forecast metrics we defined above to understand, in context, how they should be considered. The level of aggregation at which you report the metrics can cause confusion and apprehension about using forecasts. No matter what level of aggregation you report on, all of the metrics are correct; they simply apply to different situations. The mistake often made is comparing them to one another. Misusing these metrics can lead to focusing on questions like:

  • Is the forecast usable?
  • How can we improve the forecast?
  • Why doesn't this metric apply to all forecasts?
  • Is the forecast right?

George Box, one of the greatest statistical minds of the twentieth century, said, “All models are wrong, but some are useful.” This is going to be true of your forecasts, and understanding how they are useful is vital to consuming them. The art of management is not letting the forecast metrics alone guide your discussion. It is important to understand what these metrics are telling you and how to use them in your decision-making process. Let's examine the mean absolute error (MAE) and the mean absolute percentage error (MAPE). Understanding how these two metrics work together is important so that they do not unduly influence judgment on the trustworthiness of forecast models. Always keep in mind that the objective is to assess whether we can use the forecasts to enact change through the decision-making process.

As outlined above, we are producing forecasts for different metrics. For this discussion, we will compare and contrast the MAE and MAPE for the surgical volume and evaluation and management (E&M) visit forecasts. Since the case study is built around our weekly operating meetings, we calculate the MAE and MAPE on a weekly basis. Again, the forecast is daily and we are producing a two-month forecast. Defining the time periods for metric comparisons is important. We could have discussed daily, bi-weekly, or monthly MAEs and MAPEs; however, metrics at those aggregation points would not translate well into the process we are solving for, the weekly operational meetings. We calculate weekly MAEs and MAPEs for different dimensions, namely facilities and institutes, for planning purposes, and we will focus the discussion on institute comparisons to take the dynamics of our organizational structure out of the picture.
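As an illustration of the aggregation step (column names and values are hypothetical, not the team's data model), rolling daily forecast errors up to weekly MAE and MAPE by institute might look like this:

```python
# Sketch: roll daily forecast results up to weekly MAE and MAPE by institute,
# matching the weekly operational meeting cadence. Values are made up.
import pandas as pd

daily = pd.DataFrame({
    "date": pd.to_datetime(["2023-09-04", "2023-09-05", "2023-09-04", "2023-09-05"]),
    "institute": ["A", "A", "B", "B"],
    "actual":   [300, 310, 90, 110],
    "forecast": [295, 320, 95, 100],
})
daily["abs_err"] = (daily["actual"] - daily["forecast"]).abs()
daily["abs_pct_err"] = daily["abs_err"] / daily["actual"]

weekly = (
    daily.groupby(["institute", pd.Grouper(key="date", freq="W-SUN")])
         .agg(mae=("abs_err", "mean"), mape=("abs_pct_err", "mean"))
)
weekly["mape"] *= 100  # express as a percentage
print(weekly)
```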

For simplicity, we will use the hypothetical values in Table 5.1 to guide our discussion. We needed to educate the end users not only on the MAE and MAPE calculations themselves, but on how to interpret the values and use them by institute and across institutes and across metrics. The MAEs and MAPEs for these metrics are volume-related and have different implications.

We will closely examine Scenario 1 for both weekly forecasts. What do they have in common? One can easily see the MAEs are identical within each forecast. Would you feel comfortable using the forecast for surgical volume or E&M volume? If so, for which institutes would you consider using it? Without focusing on anything else, you might be inclined to consider the surgical forecast more accurate, since its MAEs are more than 10 times lower than the E&M forecast's. Within each forecast, however, the MAEs can tell a different story: some give you confidence that your forecasts will provide actionable intelligence. Now focus your attention on the MAPEs for each forecast. What story do they tell? Certain institute forecasts provide a percentage error that one feels comfortable using. One thing you will notice is that there is no total row for the forecast models. One reason is the way we are using them: we are making operational planning decisions at the institute level, so overall performance tends to matter less for this process. This comes back to the question of whether you should target a specific metric in order to use a forecast. Should we arbitrarily set an accuracy objective? What would happen if you set an objective of 5% accuracy, or required the forecast to be within a certain volume? The nature of forecasting means such targets can set unrealistic expectations of model performance. Does that mean we cannot use the surgical forecasts or E&M forecasts for Institutes C and D in Scenario 1?

Table 5.1 Surgical and E&M Forecast Model Performance

Source: Author.

Surgical Volume Forecast
                 Average Weekly     Scenario 1          Scenario 2
                 Surgical Volume    MAE     MAPE        MAE     MAPE
Institute A            1,500         20      1.3%        125     8.3%
Institute B              500         20      4.0%         15     3.0%
Institute C              100         20     20.0%         50    50.0%
Institute D               25         20     80.0%          2     8.0%

E&M Visit Forecast
                 Average Weekly     Scenario 1          Scenario 2
                 E&M Volume         MAE     MAPE        MAE     MAPE
Institute A           20,000        250      1.25%     1,000     5.0%
Institute B            5,000        250      5.0%        750    15.0%
Institute C            2,500        250     10.0%        100     4.0%
Institute D            1,000        250     25.0%        100    10.0%
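One way to see why identical MAEs produce very different MAPEs in Scenario 1: when weekly volumes stay close to their averages, MAPE is roughly the MAE divided by the average weekly volume. The quick check below reproduces the Scenario 1 surgical percentages from the table; this approximation is for intuition only, and the reported MAPE is still computed point by point.

```python
# Quick intuition check for Scenario 1 surgical volumes: with a constant MAE,
# MAPE scales inversely with average weekly volume (approximation only).
avg_weekly_volume = {"A": 1500, "B": 500, "C": 100, "D": 25}
mae = 20
for institute, volume in avg_weekly_volume.items():
    approx_mape = mae / volume * 100
    print(f"Institute {institute}: ~{approx_mape:.1f}%")
# Prints ~1.3%, 4.0%, 20.0%, 80.0% (matching Table 5.1, Scenario 1).
```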

Let's transition to Scenario 2 before we tie this table back to the important concept of the art of management. At first glance, there are no commonalities between the forecasts: the MAEs are all different, as are their respective MAPEs. What would happen if you set an objective of 5% accuracy, or required the forecast to be within a certain volume, in this case? Could we only use Institute B's surgical forecast and Institutes A and C's E&M forecasts? All of the questions explored above in Scenario 1 can be asked of the Scenario 2 performance results. The one common thread woven through Scenario 1 and Scenario 2 is going back and revisiting why we are producing a forecast: so that we can modify our current operational plan. Remember those questions related to the art of management?

  • Can we make a decision?
  • What decision are we trying to influence?
  • At what point would you take action?
  • What is the acceptable range of outcomes at various volume sizes?
  • Do we understand the forecast bias in adjusting our decision-making process?

Do the MAEs and MAPEs give you specific guidance? Should you rely solely on those metrics across institutes and metrics? The answer is no. It is about working together: analysts, end users, and senior leaders need to fully understand the implications the forecast will have in the operational meetings and how each of the questions above plays a role. No single question above should be considered in a silo, nor should you weigh only a subset of them. You should not rely only on MAEs or MAPEs or adjusted R²s. This is a continuous, two-way collaboration and education effort between the analytics team and the operations team. The analytics team is constantly educating the end users on model performance metrics and how those metrics should be consumed to influence the decision-making process. The operations team is constantly educating the analytics team about anomalies and communicating planning, intervention, and/or process changes to enhance the forecasts. It is these five driving questions that we constantly keep at the forefront to make sure our forecasts are actionable.

The ultimate question is, can you make a decision? This is not about MAEs or MAPEs. This is not about changing the current plan. This is about the forecast providing support and intelligence in the decision-making process. Forecasts are not produced so that we will make a change; they are produced to provide insight into whether we should make a change. Is the forecast providing intelligence into volumes that will lead to new actions you would not have considered yesterday, a week ago, two weeks ago, or whatever the timeframe? The forecasts should drive interactions between the analytics team and the operations team to understand the underpinnings of the forecast models and make sure they are producing actionable insights. It is not productive for the end users to set MAPE or MAE tolerances; trying to set general MAPE or MAE guidelines is self-defeating. All models are different, with unique characteristics driving them, and applying general accuracy measures to every model creates unrealistic expectations of performance. Focusing on these measures alone, and not the entire spectrum, will put you on the hamster wheel to nowhere. The question, “Can you make a decision?” seems pretty simple to answer. Isn't it yes or no? In reality, it is quite difficult, because it is not about the MAEs or MAPEs. “Can you make a decision?” is the driving question, with multiple sub-questions that need to be understood, considered, and incorporated into the models. The key takeaway for this question is to make sure you consider those important sub-questions driving the process of making a decision. How do holiday and physician staffing levels play a role? What are the financial performance targets we budgeted for? What unusual events did we, or can we, account for? How are seasonal cycles affecting the model? This is the art of management that drives successful collaboration between teams to produce a forecast that gives senior leaders confidence in their decision-making.

What decisions are we trying to influence? Are we collaborating with end users to understand staffing decisions? Broader operational metrics? Are we looking to move scheduled surgeries or visits? Are there optimization consequences involved?

At what point would you take action? Forecasts become less reliable the further out they project; short-term forecast errors are smaller than longer-term ones. Let's say it is September 1 and you are using a forecast with predictions out through the end of October. Are you starting to plan for the end of October, or are you more concerned with the projections for the middle or end of September? As the timeline closes and the end-of-October forecasts are revised, do you change your decision-making process? That is, at what point would you start taking action? Are we making planning decisions two months out and refining them as the dates get closer?

Do we understand forecast bias, and should we make adjustments? Not all forecasts produce consistent forecast bias. Some models may systematically over-forecast; some may systematically under-forecast. Some of the bias could be data-related: is the most recent data weighted more heavily or more lightly in the forecast? Some of the bias can be human-related, such as setting weights for inputs based on gut feel. Let's assume we are talking about adjusting for holidays. What weights should we use? We can categorize the holidays into major and minor and assign a number that influences major holidays differently than minor holidays. What do we consider major or minor holidays? Who makes that determination? How do we educate on forecast bias, and how do we adjust our decision-making process? How will forecast bias change over time? Could we do more harm than good by trying to adjust for bias? In general, if you understand the bias, you can account for it by adjusting the forecast routinely. This is the art of management that ensures forecast bias is handled successfully.
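As one hedged illustration of what “understanding bias” could look like in practice (not the team's actual approach), you can track the mean signed error over a trailing window and decide whether a routine additive adjustment is warranted; the window length and threshold below are arbitrary assumptions.

```python
# Illustrative only: measure forecast bias as the mean signed error over a
# trailing window, and apply a simple additive correction when the bias is
# persistent enough to matter operationally.
import numpy as np

def forecast_bias(actuals: np.ndarray, forecasts: np.ndarray) -> float:
    """Positive bias = systematic under-forecasting; negative = over-forecasting."""
    return float(np.mean(actuals - forecasts))

def adjust_for_bias(new_forecast: np.ndarray, bias: float,
                    min_bias: float = 5.0) -> np.ndarray:
    """Only adjust when the measured bias exceeds an agreed operational threshold."""
    return new_forecast + bias if abs(bias) >= min_bias else new_forecast

recent_actuals   = np.array([520, 540, 510, 530])
recent_forecasts = np.array([500, 515, 495, 505])
bias = forecast_bias(recent_actuals, recent_forecasts)   # ~21.25: persistent under-forecast
adjusted = adjust_for_bias(np.array([505, 512, 498]), bias)
```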

One thing a forecast does not have is human business knowledge. Some might argue that it does, and it does to some degree: it has variables and information from business cycles, processes, and so on that we can translate into the model. What forecast models do not necessarily reflect in a timely and accurate way are changes in business processes or the “gut” intuition that management has. The forecast has business knowledge incorporated into it, like holidays, vacation schedules, school breaks, insurance benefits, etc., but sometimes up-to-the-minute business knowledge trumps the scientific forecast. Maybe there was a process change or a recent uptick in the flu that has not been accounted for in the model yet. There will be times when the forecasts project lower or higher numbers that cause you, the decision-maker, to pause and alter your decision. Understanding those biases allows your judgment to guide you.

During this project, we are continuing to use the Cleveland Clinic Continuous Improvement model. As we think about Figure 3.1, we can focus on the challenges of understanding the work through engaging with our end users, doing the work right, and doing the right work.

Understanding the work through engaging with our end users was not just a 30-minute meeting with the outcome of “We would like you to produce a forecast so we can run our operations meetings.” It meant setting up a recurring set of check-ins during the project to make sure requirements were gathered, enhancements were made, and deliverables were tested. It allowed the team to understand the product needed to run the weekly operations meeting. Consistent end user engagement builds trust, surfaces the end user's pain points, and allows you to manage expectations when deliverables cannot alleviate those pain points, offering opportunities to brainstorm solutions or bring critical decisions to the sponsors.

Doing the work right means giving the team the freedom to deliver an innovative solution that incorporates the steps outlined in Figure 3.5, the evolving process workflow. It allows the team to use lean principles to build a process flow from data collection, to running the forecast models, to integrating model output with other data sources, such as the budget data, creating a meaningful data set that can be used to run the meetings and deliver actionable insights to improve performance. At the time of this writing, the team was in the final stages of coding an automated solution that will run daily, hold a champion tournament across all of the forecast models created, and deliver the best result. This automated solution will include the final steps of incorporating the data from these weekly operational meetings into our corporate statistics dashboards.
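A conceptual sketch of what a daily champion tournament might look like appears below; the candidate models, holdout scheme, and selection metric are illustrative assumptions, not the team's implementation.

```python
# Conceptual sketch of a daily champion tournament: fit each candidate on the
# training history, score it on a recent holdout, and promote the model with
# the lowest holdout MAE. Candidate models here are deliberately simple.
import numpy as np

def naive_last_value(train: np.ndarray, horizon: int) -> np.ndarray:
    """Repeat the last observed value over the forecast horizon."""
    return np.repeat(train[-1], horizon)

def moving_average(train: np.ndarray, horizon: int, window: int = 7) -> np.ndarray:
    """Repeat the trailing-window average over the forecast horizon."""
    return np.repeat(train[-window:].mean(), horizon)

def run_tournament(history: np.ndarray, holdout_days: int = 14) -> str:
    train, holdout = history[:-holdout_days], history[-holdout_days:]
    candidates = {"naive": naive_last_value, "moving_average": moving_average}
    scores = {
        name: float(np.mean(np.abs(holdout - model(train, holdout_days))))
        for name, model in candidates.items()
    }
    return min(scores, key=scores.get)  # the champion feeds the operating meeting

champion = run_tournament(np.array([100, 104, 98, 102, 99, 101, 103] * 10, dtype=float))
```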

Doing the right work was a by-product of understanding the work. This gave the teams a backlog of enhancements and a task list that was managed through weekly huddles. It gave the teams a forum to share accomplishments, discuss what was and was not working, review forecast outputs for reasonableness and reliability, discuss priorities, and inform the analytics team of interventions or process changes to consider.

Finally, it is no secret that having a technology platform with analytics at the core is what makes this possible. The complexity of this case study's requirements, the data transformation involved throughout the various processing stages, the data coordination between forecast models and between forecast output and other data sources, and the need to produce output suitable for visualization would have been a challenging task in a traditional data warehouse environment. Projects like this will only grow in size and complexity, and being able to use technology as an enabler, not a solution, will lead to success. Whether you have the ability to use in-memory processing, in-database processing, vendor or open source solutions, or a cloud-based solution, technology is going to play a key role in delivering a computation platform that can handle the next generation of analytical solutions. As we think about enhancements to our weekly operating meeting that could include weather data and change data capture, that vast amount of data will require a technology solution able to leverage all of these sources.

CASE STUDY TAKEAWAYS

The success of any project requires a shared sustainability effort between all teams. The analytics team will have responsibilities and the weekly operating team will have responsibilities. Some responsibilities will be daily in nature, some will originate from meetings, and some might be initiated by process changes. The analytics team needs to deliver a daily production model. This requires constant monitoring of the process to ensure it runs in time to meet processing deadlines. When the process “breaks,” the team needs to respond, understand what went wrong, and make the necessary coding and scheduling adjustments on a timely basis so the forecast data does not become too outdated. Continuous education is a key element: we need to educate end users on the inputs and outputs and how the models work to make sure they are making informed decisions. It will require continuous monitoring of the forecast models for the following four reasons:

  1. To deliver dynamic model recalibration
  2. To understand important model key performance indicators
  3. To communicate to key stakeholders changes or interventions potentially impacting results
  4. To explore new features

The weekly operating team would be required to interpret the results in context for the meetings. They would need to identify and investigate data anomalies with the appropriate facilities and/or clinical institutes to provide context back to the analytics team. This important responsibility enables the analytics team to make sure forecasts are tuned properly when necessary. The team would also need to help the analytics team by sharing planning and intervention methods. Clearly articulating and communicating interventions or process changes will enable the analytics team to incorporate those events and enhance the forecast.

The final sustainability effort is held jointly. It requires regular check-ins across all levels of the organizational hierarchy. The cadence of these check-ins will differ depending on the level: senior executives will probably need a check-in at most quarterly, while check-ins for the end users and analytics staff will be more frequent, bi-weekly or even weekly. These check-ins are important because they create a feedback loop to dynamically enhance the forecasts. They create collaboration opportunities around model tuning and education on model features. Even during this first round of forecast model building, the interaction and feedback between all teams continues to produce a list of data enhancements to be explored. A couple of examples are weather-related data like temperature, snowfall (being in the Midwest, travel can be difficult on occasion), and pollen, and seasonal flu data from the Centers for Disease Control.

CONCLUSION

Unifying people, process, and technology is a strategy that requires enterprise-wide support. It cannot be successful without leadership that sponsors (and, depending on your organizational model, funds) the projects. These projects require people, process, and technology to be integrated and to work together. In our experience, success depends on a team-of-teams approach, from the analytics team to the development team to the end users. Having the right framework surrounding people, process, and technology will provide you with the roadmap for success. It requires a cultural change that needs to be applied daily. Strategies are not implemented overnight. It is a change that takes time, and only your organization will know its timeline. Every journey seems to start out slow. Be patient. Be innovative. The quest for intelligence is rewarding. Don't let a disappointment or two derail you; disappointment is always going to be part of the journey. Finding engaged leaders with whom you can turn projects into success stories is the fuel that keeps the strategy burning. The same can be said for the front-line caregivers. For those delivering the analytical solutions, seeing the fruits of their labor in action is also fuel to keep the strategy active.