Jeffrey M. Simmons, Samir S. Shah
Adults and children only receive recommended evidence-based care about half the time. The gap between knowledge and practice widens to a chasm in part because of variations in practice and disparities in care from doctor to doctor, institution to institution, geographic region to geographic region, and socioeconomic group to socioeconomic group. Furthermore, it is estimated that it takes about 17 yr for new knowledge to be adopted into clinical practice.
In addition to failing to deliver appropriate care, U.S. healthcare systems also deliver much care that is unnecessary and waste many resources in doing so. Such overuse and waste are a key driver of the disproportionate costs of care in the United States compared with other developed countries' delivery systems: in 2016, adjusting for gross domestic product (GDP), the United States spent about twice as much per capita on healthcare as the average of peer wealthy nations. It is estimated that more than one quarter of all U.S. healthcare spending is waste. Gaps in appropriate care, combined with overuse and high costs, have driven conversations about the need to improve the value of care, meaning better quality at lower overall cost. Choosing Wisely, an initiative initially sponsored by the American Board of Internal Medicine and subsequently endorsed by the American Academy of Pediatrics (AAP), asked medical societies to identify commonly overused practices that clinicians could then make collective efforts to address.
Quality improvement (QI) science has become a predominant method for closing these gaps and improving value. QI initially focused on improving the performance and reliability of care processes; more recently, in part inspired by the Institute for Healthcare Improvement's Triple Aim, it has been used to improve value for populations of patients by focusing more on outcomes defined by patients' needs. The Quadruple Aim adds a 4th dimension, healthcare worker experience or joy in work, focusing delivery systems on the need to enhance the resiliency of the clinical workforce in order to sustain high-value care.
The Institute of Medicine (IOM) defines quality of healthcare as “the degree to which healthcare services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge.” This definition incorporates 2 key concepts related to healthcare quality: the direct relationship between the provision of healthcare services and health outcomes, and the need for healthcare services to be based on current evidence.
To measure healthcare quality, the IOM has identified Six Dimensions of Quality: effectiveness, efficiency, equity, timeliness, patient safety, and patient-centered care. Care must be effective, meaning that healthcare services should produce the intended benefits and outcomes. Healthcare services also need to be efficient, which incorporates the idea of avoiding waste and improving system cost efficiencies. Care must be safe, meaning that healthcare services should avoid harming the patients they are intended to help. Healthcare quality must be timely, thus incorporating the need for appropriate access to care (see Chapter 5). Healthcare quality should be equitable, which highlights the importance of minimizing variations as a result of ethnicity, gender, geographic location, and socioeconomic status (SES). Healthcare quality should be patient centered, which underscores the importance of identifying and incorporating individual patient needs, preferences, and values in clinical decision-making. In pediatrics, the patient-centered dimension extends to family-centeredness, so that the needs, preferences, and values of parents and other child caregivers are considered in care decisions and system design.
The IOM framework emphasizes that all Six Dimensions of Quality must be met to provide high-quality healthcare. Collectively, these dimensions represent the quality side of the overall value proposition of quality per cost. From the standpoint of the practicing physician, these 6 dimensions can be categorized into clinical quality and operational quality. To provide high-quality care to children, both aspects of quality, clinical and operational, must be met. Historically, physicians have viewed quality as limited in scope to clinical quality, with the goal of improving clinical outcomes, while considering improvements in efficiency and patient access to healthcare to be the role of health plans, hospitals, and insurers. Healthcare organizations, which are subject to regular accreditation requirements, viewed the practice of clinical care delivery as the responsibility of physicians and limited their quality efforts largely to process improvement to enhance efficiencies.
The evolving healthcare system requires physicians, healthcare providers, hospitals, and healthcare organizations to partner together and with patients to define, measure, and improve the overall quality of care delivered. Concrete examples of the evolving U.S. perspective include the widespread adoption of Maintenance of Certification (MOC) requirements by medical-certifying bodies, which require providers to engage in activities that improve care in their practices, and the core quality measurement features and population health incentives of the Patient Protection and Affordable Care Act (ACA) of 2010. The ACA also established the Patient-Centered Outcomes Research Institute (PCORI) to develop a portfolio of effectiveness and implementation research that requires direct engagement of patients and families to partner in setting research priorities, formulating research questions, and designing studies that will directly impact the needs of patients to improve the value of the research.
Quality is broader in scope than QI. The approach to quality includes 4 building blocks. First, the standard for quality must be defined (i.e., developing evidence-based guidelines, best practices, or policies that guide the clinician for the specific clinical situation). These guidelines should change as new evidence emerges. In 2000–2001 the AAP published guidelines for care of children with attention-deficit/hyperactivity disorder (ADHD); in 2011 these were updated, based on new evidence, to place greater emphasis on behavioral interventions rather than pharmacologic options. Similarly, the AAP has emphasized that guidelines evolve to include greater consideration of value in care, an example being the 2011 update to the clinical practice guideline for urinary tract infection, which called for a decrease in the use of screening radiologic tests and prophylactic antibiotics in certain populations of children because of a lack of cost-effectiveness. Second, gaps in quality need to be closed. One key gap is the difference between recommended care and the care actually delivered to a patient. Third, quality needs to be measured. Quality measures can be developed as measures for accountability and measures for improvement. Accountability measures are developed with a high level of demonstrated rigor because they are used for measuring and comparing the quality of care at the state, regional, or health system (macro) level. Often, accountability measures are linked to pay-for-performance (P4P) incentive arrangements for enhanced reimbursement at the hospital and individual physician level. In contrast, improvement measures are metrics that can demonstrate the improvement accompanying a discrete QI project or program. These metrics need to be locally relevant and nimble, and typically have not undergone rigorous field testing. Fourth, the quality measurement approach should be used to advocate for providers and patients. For providers, meeting quality goals should be a key aspect of reimbursement if the system is designed to incentivize high-value care. At the population level, quality measurement strategies should advocate for preventive and early childhood healthcare, improving the value of care by decreasing costs across a patient's life span.
Lastly, many quality measurement systems have attempted to be more transparent with clinicians and patients about costs of care. Because more direct costs have been shifted to patients and families through widespread adoption of high-deductible insurance plans (i.e., families experience lower up-front insurance coverage costs but pay for certain acute healthcare expenses out-of-pocket until the preset deductible is met), better awareness of costs has become a more effective driver of improvement in value, in part by reducing overuse.
Guidelines should be developed using accepted methods, such as the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system for rating the quality and strength of the evidence, and the development process should be highly transparent. Transparency is particularly relevant in the pediatric setting, where there is often limited research using methods, such as randomized controlled trials (RCTs), that would receive a high rating from an evidence standpoint. Because guidelines and policies related to quality must be adapted to specific settings, they should not be interpreted as “standards of care.”
The applied science of QI currently in use in healthcare is also firmly grounded in the classic scientific method of observation, hypothesis, and planned experimentation. There are 4 key features of the applied science of quality improvement: appreciation of systems, understanding variation, theory of knowledge, and the psychology of change. In addition to this theoretical framework, statistical analytic techniques have evolved to better evaluate variable systems over time. Multiple QI methodologies are currently in use in healthcare, each deriving key features from this applied scientific foundation. At its most parsimonious, each method can be described as a 3-step model: Data → Information → Improvement. Quality needs to be measured. Data obtained from measurement need to be converted into meaningful information that can be analyzed, compared, and reported. Information must then be made actionable to achieve improvements in clinical practice and health systems' processes.
The Model for Improvement is structured around 3 key questions: (1) What are we trying to accomplish? (2) How will we know that a change is an improvement? and (3) What change can we make that will result in improvement? Clarifying the first question, the goal, is critical and is often a step skipped by clinicians, who typically already have change ideas in mind. The second question is about defining measures, with an emphasis on practicality and efficiency. The third question is about defining testable ideas for improvement, which are subsequently tested using a framework of rapid cycle improvement, also known as the plan-do-study-act (PDSA) cycle (Fig. 4.1A). The PDSA cycle is typically aimed at testing small care process changes in iterative, rapid cycles. After discrete testing periods, results are analyzed, and the next cycle of change testing is planned and implemented (i.e., multiple PDSA cycles, often called a PDSA ramp, build on learning from previous PDSAs; Fig. 4.1B). Valuable information can be obtained from PDSA cycles that are successful, and from those that are not, to help plan the next iteration. The PDSA cycle specifically requires that improvements be data-driven; many clinicians instead attempt to make changes in their practice based on clinical intuition rather than on interpretation of empirical data.
The Model for Improvement has been successfully used in the Vermont Oxford Network (VON) to achieve improvements in care in the neonatal intensive care unit (NICU) setting. The VON is a global network of collaborating NICUs involved in several studies that have favorably impacted the care of newborns. An example of a successful VON QI effort is a project aimed at reducing rates of chronic lung disease in extremely-low-birthweight infants. Clinical teams participating in this improvement effort used special reports from the VON database, reviewed the available evidence with content faculty experts, and then identified improvement goals. The teams received QI training through conference calls and emails for 1 yr. This effort resulted in a 37% increase in early surfactant administration for preterm infants.
One successful QI collaborative using the improvement model in the outpatient setting is related to improvement in remission rates and reduction in systemic corticosteroid use among children with inflammatory bowel disease (IBD, Crohn disease or ulcerative colitis). This work was supported by the ImproveCareNow Network (https://improvecarenow.org/), a learning health system. A learning health system is a collaborative endeavor organized around communities of patients, clinicians, and researchers working together to integrate research with QI (i.e., knowledge dissemination and implementation) to improve care delivery while advancing clinical research. The network model leverages the inherent motivation of participants to engage and contribute in a collaborative manner. Participants are supported by the development of standard processes, such as common approaches to data transfer, measurement, and reporting, as well as an emphasis on data transparency, and share knowledge, tools, and resources to accelerate learning and facilitate uptake of useful innovations. For the IBD network, outpatient gastroenterology practices standardized treatment approaches to align with existing evidence through QI interventions adapted to local circumstances, while therapeutic decisions for individual patients remained at the discretion of physicians and their patients. This network also developed methods to more fully engage patients, particularly adolescents, and their caregivers through the use of social media, which helped drive improvement in some of the clinical behavior change aspects of the work.
Six Sigma focuses on reducing undesirable variation in processes. There are 2 types of variation in a process. Random (common cause) variation is inherent in a process simply because the process occurs within a system and is expected in any system. In contrast, special cause variation refers to nonrandom variation that affects a process and implies that something in the system has been perturbed. For example, when tracking infection rates in a nursery, a sudden increase in the infection rate may be secondary to poor handwashing technique by a new healthcare provider in the system. This would represent special cause variation; once this provider's practice is improved, the system perturbation is resolved, and the infection rate will likely return to the baseline level. Alternatively, improvement ideas are intended to perturb the system positively so that outcomes improve, ideally without exacerbating the variation in the system (Fig. 4.2). Six Sigma provides a structured approach to reducing unwanted variation in healthcare processes and has been used successfully to improve processes in both clinical and nonclinical settings.
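A control chart makes this distinction operational by plotting a measure over time against statistically derived limits. Below is a minimal sketch of the underlying arithmetic in Python, using invented monthly infection rates; for simplicity it estimates the limits from the sample standard deviation of a baseline period, whereas formal individuals charts typically estimate sigma from the moving range.

```python
# Minimal control chart arithmetic: baseline mean +/- 3 sigma as control limits.
# All rates below are invented for illustration (infections per 100 patient-days).
from statistics import mean, stdev

baseline = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2, 2.3, 1.7, 2.0, 2.1, 1.9, 2.2]
new_points = [2.0, 2.3, 4.6, 2.1]    # 4.6 mimics the new-provider handwashing example

center = mean(baseline)
sigma = stdev(baseline)              # simple estimate; moving-range estimates are more standard
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for month, rate in enumerate(new_points, start=1):
    label = "possible special cause" if (rate > ucl or rate < lcl) else "common (random) cause"
    print(f"month {month}: rate={rate:.1f}, limits=({lcl:.2f}, {ucl:.2f}), {label}")
```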
Lean methodology focuses on reducing waste within a process in a system. Fig. 4.3A illustrates the steps in the process of a patient coming to the emergency department (ED). After the initial registration, the patient is seen by a nurse and then the physician. In a busy ED, a patient may need to wait for hours before registration is complete and the patient is placed in the examination room. This wait time is a waste from the perspective of the patient and the family. Incorporating the registration process after placing the patient in the physician examination room can save time and minimize waste (Fig. 4.3B). Lean methods have been successfully used in several outpatient and inpatient settings with resulting improvements in efficiency. Lean principles have also been adopted as a core strategy for many children's hospitals and health systems with the goal of improving efficiencies and reducing waste. These efforts can improve aspects of quality while also typically reducing costs.
Management science, also known as operations management, stems from operations research and refers to the use of mathematical principles to maximize efficiencies within systems. Management science principles have been used successfully in many European healthcare settings to optimize efficiencies in outpatient primary care offices, inpatient acute care hospitals, and surgical settings including operating rooms, as well as for effective planning of transport and hospital expansion policies. Management science principles are being explored for use in the U.S. healthcare system; one technique, discrete event simulation, was used at the Children's Hospital of Wisconsin to plan the expansion of pediatric critical care services with the goal of improving quality and safety. The discrete event simulation model illustrated in Fig. 4.4 depicts the various steps of the process in a pediatric intensive care unit (PICU). Patients stratified across 3 levels of severity (low, medium, high) are admitted to the PICU, are initially seen by a nurse and physician, then stay in the PICU with ongoing care provided by physicians and nurses, and finally are discharged from the PICU. The discrete event simulation model is a computer model developed using real estimates of the number of patients, the number of physicians and nurses in a PICU, and patient outcomes. Because discrete event simulation models are created using real historical data, they allow testing of “what if” scenarios, such as the impact on patient flow and throughput of increasing the number of beds and/or changing nurse and physician staffing.
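To make the mechanics concrete, the following is a minimal discrete event simulation sketch in Python using the SimPy library. The bed count, arrival rate, severity mix, and lengths of stay are invented assumptions, not parameters from the Children's Hospital of Wisconsin model; the point is only to illustrate how such a model lets planners rerun the same system under different assumptions.

```python
# Minimal discrete event simulation of PICU bed flow (illustrative parameters only).
import random
import simpy

N_BEDS = 12                          # assumed PICU capacity
ARRIVALS_PER_DAY = 3.0               # assumed mean admissions per day
MEAN_LOS_DAYS = {"low": 2.0, "medium": 4.0, "high": 8.0}   # assumed length of stay by severity
SEVERITY_WEIGHTS = {"low": 0.5, "medium": 0.35, "high": 0.15}
SIM_DAYS = 365

wait_times = []                      # days each patient waited for a free bed

def patient(env, beds):
    severity = random.choices(list(SEVERITY_WEIGHTS),
                              weights=list(SEVERITY_WEIGHTS.values()))[0]
    arrival = env.now
    with beds.request() as bed:
        yield bed                                    # wait until a bed is free
        wait_times.append(env.now - arrival)
        yield env.timeout(random.expovariate(1.0 / MEAN_LOS_DAYS[severity]))  # occupy the bed

def arrivals(env, beds):
    while True:
        yield env.timeout(random.expovariate(ARRIVALS_PER_DAY))   # time to next admission
        env.process(patient(env, beds))

random.seed(0)
env = simpy.Environment()
beds = simpy.Resource(env, capacity=N_BEDS)
env.process(arrivals(env, beds))
env.run(until=SIM_DAYS)

print(f"admissions: {len(wait_times)}, "
      f"mean wait for a bed: {sum(wait_times)/len(wait_times):.2f} days")
# Rerunning with a different N_BEDS or severity mix answers the "what if"
# questions about patient flow and throughput described above.
```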
Another management science technique, cognitive mapping, addresses the “soft” aspects of management science, as illustrated in Fig. 4.5. Cognitive mapping highlights the importance of the perceptions and constructs of healthcare providers and the way these constructs are linked in a hierarchical manner. The goals and aspirations of individual healthcare providers are identified through structured interviews and are mapped to strategic issues, problems, and options. Using specialized computer software, complex relationships can be identified to better understand how different constructs in a system relate to one another. A discrete event simulation model views patient throughput based on numbers of beds, physicians, and nurses, and accounts for differences in patient mix; it does not account for many other factors, such as individual unit characteristics related to culture. By interviewing healthcare providers, cognitive maps can be developed that help to better inform decision-making.
QI efforts need to be organized around a theory of how the desired changes in outcome will be achieved. Multiple tools are available to help organize a QI team's thinking and execution. These tools typically help teams organize work into discrete projects or phases, and some of them also help teams develop change ideas.
Key driver diagrams (KDDs) are a tool for organizing the theory of learning that underpins a QI project using the Model for Improvement (Fig. 4.6). Important aspects of a KDD include a statement of the specific aim or improvement goal; a list of the key themes, or drivers, that are theorized to require improvement in order to achieve the aim; and a list of the discrete change ideas or initiatives to be tested to determine whether they affect discrete drivers, and therefore the overall aim. Because most system outcomes are driven by multiple factors, a KDD allows a QI team to depict a theory that addresses multiple factors. Similarly, Lean and Six Sigma projects use a tool called an A3, which, in addition to organizing the theory of a project, also prompts teams to assess the current state and to consider timelines and personnel for planned change (examples available at https://www.lean.org/common/display ).
There are additional QI tools to help assess the current state of a system to better understand how to improve it. One, the failure modes and effects analysis (FMEA), also helps teams develop change ideas (Fig. 4.7). Starting with a map of the processes in the current system, FMEA then asks teams to investigate and brainstorm the many ways discrete processes can go wrong—the failure modes. Once failure modes are identified, teams begin to develop discrete interventions or countermeasures to address the failures (see Chapter 5). A similar tool, the fishbone or cause-and-effect diagram, is organized around key components in a system (e.g., people, material, machines) and helps teams catalog how deficiencies in each component can affect the overall outcome of a system.
A final key tool to help teams prioritize action is a Pareto chart, which organizes system deficiencies in terms of their prevalence (Fig. 4.8). A Pareto chart typically displays the individual prevalence of discrete problems, determined by baseline analysis of data, as well as their cumulative prevalence, helping teams see which problems should be addressed first to maximize impact on the overall outcome.
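The arithmetic behind a Pareto chart is simply sorting and accumulating. A minimal sketch in Python is shown below; the failure categories and counts are invented for illustration.

```python
# Minimal Pareto chart arithmetic: sort problem counts in descending order and
# accumulate their percentage of the total. Categories and counts are invented.
failure_counts = {
    "order entered late": 42,
    "handoff incomplete": 31,
    "supply unavailable": 12,
    "documentation error": 9,
    "other": 6,
}

total = sum(failure_counts.values())
cumulative = 0.0
for category, count in sorted(failure_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += 100.0 * count / total
    print(f"{category:<22} n={count:<3} cumulative={cumulative:5.1f}%")

# In this invented example the first two categories already account for ~73% of
# failures, so a team would typically address those first to maximize impact.
```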
Robust quality indicators should have clinical and statistical relevance. Clinical relevance ensures that the indicators are meaningful to patient care from the standpoint of patients and clinicians. Statistical relevance ensures that the indicators have measurement properties that allow an acceptable level of accuracy and precision. These concepts are captured in national recommendations that quality measures must meet the criteria of being valid, reliable, feasible, and usable (Table 4.1). Validity of quality measures refers to the measure being an estimate of the true concept of interest. Reliability refers to the measure being reproducible and providing the same result if retested. It is important that quality measures are feasible in practice, with an emphasis on how the data to support the measures are collected. Quality measures must be usable, which means that they should be clinically meaningful. The Agency for Healthcare Research and Quality and the National Quality Forum have provided specific criteria to be considered when developing quality measures.
Table 4.1
Quality indicators can be used to measure the performance within 3 components of healthcare delivery: structure, process, and outcome. Structure refers to the organizational characteristics in healthcare delivery. Examples of organizational characteristics are the number of physicians and nurses in an acute care setting and the availability and use of systems such as electronic health records. Process measures estimate how services are provided; examples are the percentage of families of children with asthma who receive an asthma action plan as part of their office visit and percentage of hospitalized children who have documentation of pain assessments as part of their care. Outcome measures refer to the final health status of the child; examples are risk-adjusted survival in an intensive care unit setting, birthweight-adjusted survival in the NICU setting, and functional status of children with chronic conditions such as cystic fibrosis.
It is important to distinguish between measures for accountability and measures for improvement. Measures, particularly measures for accountability that may be linked to attribution and payment, must be developed through a rigorous process (Fig. 4.9), which can be resource intensive and time-consuming. In contrast, measures for improvement serve a different purpose: to track incremental improvements linked to specific QI efforts. These may not undergo rigorous testing, and consequently they have limited applicability beyond the specific QI setting.
Quality data can be quantitative or qualitative. Quantitative data are numerical and can be continuous (e.g., patient satisfaction scores represented as a percentage, with higher numbers indicating better satisfaction) or categorical (e.g., patient satisfaction scores from a survey using a Likert scale indicating unsatisfactory, satisfactory, good, or superior care). Qualitative data are nonnumerical; examples include responses to open-ended surveys about satisfaction with care in a clinic or hospital setting.
Data measuring quality of care can be obtained from a variety of sources, including chart reviews, patient surveys, existing administrative data sources (e.g., billing data from hospitals), disease and specialty databases, and patient registries that track individual patients over time. Sources of data vary in terms of reliability and accuracy, which influences their rigor and therefore the appropriate use cases for the data; many national databases invest significant resources in processes to improve data reliability and accuracy.
It is important to distinguish between databases and data registries. Databases are data repositories that can be as simple as a Microsoft Excel spreadsheet or as complex as relational databases using sophisticated servers and information technology platforms. Databases can provide a rich source of aggregated data for both quality measurement and research. Data registries allow tracking individual patients over time; this dynamic and longitudinal characteristic is important for population health management and QI.
Data quality can become a significant impediment when using data from secondary sources and can adversely impact the overall quality evaluation. Once data on a quality indicator have been collected, quality measurement can occur at 3 levels: (1) measuring quality status at one point in time (e.g., percent of children seen in a primary care office setting who received the recommended immunizations by 2 yr of age); (2) tracking performance over time (e.g., change in immunization rates for children 2 yr of age in that primary care office setting); and (3) comparing performance across clinical settings after accounting for epidemiologic confounders (e.g., immunization rates for children <2 yr of age in a primary care office setting stratified by race and SES, compared with the rates of other practices in the community and with national rates).
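As a brief illustration of these 3 levels, the following sketch uses pandas on a small invented immunization dataset; a real analysis would use many more records and would adjust the cross-practice comparison for confounders.

```python
# Illustrative only: the 3 levels of quality measurement on an invented dataset.
import pandas as pd

records = pd.DataFrame({
    "year":       [2021, 2021, 2021, 2022, 2022, 2022, 2022, 2022],
    "practice":   ["A",  "A",  "B",  "A",  "A",  "B",  "B",  "B"],
    "up_to_date": [True, False, True, True, True, False, True, True],
})

# (1) Quality status at one point in time: immunization rate in the most recent year
print("2022 rate (%):", 100 * records.loc[records.year == 2022, "up_to_date"].mean())

# (2) Performance tracked over time: rate by year
print(records.groupby("year")["up_to_date"].mean().mul(100))

# (3) Comparison across clinical settings: rate by practice (unadjusted)
print(records.groupby("practice")["up_to_date"].mean().mul(100))
```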
Pediatric quality measures are being developed nationally. Table 4.2 lists some currently endorsed pediatric national quality indicators.
Table 4.2
Examples of National Pediatric Quality Measures
| MEASURE CATEGORY | EXAMPLES |
|---|---|
| NQF PEDIATRIC QUALITY INDICATORS | Neonatal bloodstream infection rate; Transfusion reaction; Gastroenteritis admission rate |
| NQF-ENDORSED INPATIENT MEASURES AMONG PICUs | PICU standardized mortality ratio; PICU severity-adjusted length of stay; PICU unplanned readmission rate |
| NQF-ENDORSED INPATIENT PEDIATRIC CARE MEASURES | CAC-1 relievers for inpatient asthma; CAC-2 systemic corticosteroids for inpatient asthma; Admit decision time to ED departure time for admitted patients; Follow-up after hospitalization for mental illness (FUH); NHSN Catheter-Associated Urinary Tract Infection (CAUTI) outcome measure; NHSN Central Line-Associated Bloodstream Infection (CLABSI) outcome measure; Percent of residents or patients assessed and appropriately given the pneumococcal vaccine (short stay); Restraint prevalence (vest and limb); Validated family-centered survey questionnaire for parents' and patients' experiences during inpatient pediatric hospital stay; Nursing hours per patient day; Preventive care and screening: screening for clinical depression and follow-up plan; Skill mix (RN, LVN/LPN, UAP, and contract) |
| NQF-ENDORSED OUTPATIENT PEDIATRIC CARE MEASURES | Appropriate testing for children with pharyngitis; CAHPS clinician/group surveys (adult primary care, pediatric care, and specialist care surveys); Child and adolescent major depressive disorder: diagnostic evaluation; Child and adolescent major depressive disorder: suicide risk assessment; Follow-up after hospitalization for mental illness (FUH); Median time from ED arrival to ED departure for discharged ED patients; Pediatric symptom checklist (PSC) |
CAC, Children's Asthma Care; CAHPS, Consumer Assessment of Healthcare Providers and Systems; ED, emergency department; HBIPS, hospital-based inpatient psychiatric services; LVN/LPN, licensed vocational/practical nurse; NHSN, National Healthcare Safety Network; NQF, National Quality Forum; PICU, pediatric intensive care unit; RACHS-1, risk adjustment for congenital heart surgery; RN, registered nurse; UAP, unlicensed assistive personnel.
Three approaches have been used for analyzing and reporting quality data. The classic approach from the research paradigm applies statistical comparison of trends over time and of differences before and after an intervention; P-values are interpreted as significant if ≤0.05, meaning that the probability of observing a difference as extreme as that seen, if no true difference exists, is ≤5% (type I error). Another approach, from the improvement science paradigm, uses techniques such as run charts and control charts to identify special cause variation. In the context of quality improvement, special cause variation in the desired direction is the intent, and these analytic techniques allow improvers to quickly recognize significant changes in system performance over time. Lastly, quality data have also been reported at the individual patient level. This approach has gained popularity in the patient safety arena, where identifying individual patient events in the form of descriptive analysis (stories) may be more powerful in motivating a culture of change than statistical reporting of aggregate data as rates of adverse patient safety events (see Chapter 5).
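One widely used run chart rule flags a shift when 6 or more consecutive points fall on the same side of the baseline median. A minimal sketch of that rule in Python, on invented monthly values, is shown below; it looks only for shifts above the median, the desired direction in this hypothetical improvement project.

```python
# Minimal run chart "shift" rule: >=6 consecutive points above the baseline median
# suggests nonrandom (special cause) change. Monthly values are invented.
from statistics import median

monthly_rates = [68, 72, 70, 65, 71, 69,      # baseline period
                 74, 78, 80, 77, 82, 79, 83]  # after an intervention

baseline_median = median(monthly_rates[:6])

run_length, shift_detected = 0, False
for value in monthly_rates:
    if value > baseline_median:
        run_length += 1                # extend the run above the median
    elif value < baseline_median:
        run_length = 0                 # a point below the median breaks the run
    # points exactly on the median neither extend nor break the run
    if run_length >= 6:
        shift_detected = True

print(f"baseline median = {baseline_median}, shift detected = {shift_detected}")
```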
There is an increasing emphasis on quality reporting in the United States. Many states have mandatory policies for the reporting of quality data. This reporting may be tied to reimbursement through P4P policies, in which reimbursement by insurers to hospitals and physicians is partially based on quality metrics. P4P can include both incentives and disincentives: incentives are additional payments for meeting certain quality thresholds, and disincentives are the withholding of certain payments for not meeting those thresholds. An extension of the P4P concept is the Centers for Medicare and Medicaid Services (CMS) policy of nonreimbursable hospital-acquired conditions, formerly called never events. CMS has identified a list of hospital-acquired conditions, specific quality events for which care provided to patients will not be paid, such as wrong-site surgery, catheter-associated bloodstream infection (CA-BSI), and decubitus ulcers. This approach has not yet been widely implemented for pediatric patients.
Quality reporting is also being used in a voluntary manner as a business growth strategy. Leading U.S. children's hospitals actively compete to have high ratings in national quality evaluations reported in publications such as Parents (formerly Child) magazine and US News & World Report. Many children's hospitals have also developed their own websites for voluntarily reporting their quality information for greater transparency. Although greater transparency may provide a competitive advantage to institutions, the underlying goal of transparency is to improve the quality of care being delivered and to enable families to make informed choices in selecting hospitals and physicians for their children.
Quality measures may also be used for certifying individual physicians as part of the Maintenance of Certification process. In the past, specialty and subspecialty certification in medicine, including pediatrics, was largely based on demonstrating a core fund of knowledge by passing an examination; no specific evidence of competency in actual practice needed to be demonstrated beyond successful completion of a training program. Significant variations in practice patterns persist even among physicians who are board certified, which highlights the concept that medical knowledge is important, but not sufficient, for the delivery of high-quality care. Subsequently, the American Board of Medical Specialties, including its member board the American Board of Pediatrics, implemented the MOC process in 2010. Within the MOC process, there is a specific requirement (Part IV) for the physician to demonstrate assessment of the quality of care delivered and implementation of improvement strategies as part of recertification in pediatrics and pediatric subspecialties. Lifelong learning and the translation of learning into practice are the basis for the MOC process and an essential competency for physicians' professionalism. There are also discussions about adopting a similar requirement for Maintenance of Licensure by state medical regulatory boards.
The Accreditation Council for Graduate Medical Education requires residency programs to incorporate a QI curriculum to ensure that systems-based practice and QI are part of the overall competencies within accredited graduate medical training programs. One form of continuing medical education, performance improvement, is used for ongoing physician education. These initiatives require physicians to measure the quality of care they deliver to their patients, compare their performance to peers or known benchmarks, and work toward improving their care by leveraging QI methods. This forms a feedback loop for continued learning and improvement in practice.
Prior to comparing quality measures within and across clinical settings, it is important to perform risk adjustment to the extent that is feasible. Risk adjustment is a statistical approach that uses measures of underlying severity or risk so that outcomes can be compared in a meaningful manner. The importance of risk adjustment was highlighted in the PICU setting many years ago, when the unadjusted mortality rate for large tertiary care centers was significantly higher than that for smaller hospital settings. After severity of illness risk adjustment, it became clear that outcomes in large tertiary care PICUs appeared worse because their patients had higher severity of illness. Although this concept is now intuitive for most clinicians, the use of severity of illness models in this work allowed a mathematical estimate of patient severity using physiologic and laboratory data, which permitted statistical adjustment of outcomes and, therefore, meaningful comparison of the outcomes of large and small critical care units. Severity of illness models and the concepts of statistical risk adjustment are most developed in pediatric critical care. However, these concepts are relevant for all comparisons of outcomes in hospital settings, because sicker patients may be transferred to larger institutions for care and would therefore be expected to have poorer outcomes than patients in settings caring for less sick patients.
Risk adjustment can be performed at 3 levels. First, patients who are sicker can be excluded from the analysis, allowing comparisons within homogeneous groups. Although this approach is relatively simple, it is limited in that it excludes some patient groups from the analysis. Second, risk stratification can be performed using measures of patient acuity; for example, in the All-Patient Refined Diagnosis-Related Group system, patients can be grouped or stratified into different severity strata based on acuity weights, which may provide more homogeneous strata within which comparisons can be performed. Third, severity of illness risk adjustment can use clinical data to predict outcomes for patient groups, as with the Pediatric Risk of Mortality (PRISM) scoring system in the PICU setting. In the PRISM score and its subsequent iterations, physiologic and laboratory parameters are weighted in a logistic regression model to predict the risk of mortality for that PICU admission. By comparing observed and expected outcomes (i.e., mortality or survival), a quantitative estimate of the performance of that PICU can be established (the standardized mortality ratio), which can then be used to compare outcomes with other PICUs.
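A minimal sketch of the observed-to-expected arithmetic behind a standardized mortality ratio is shown below. The predicted risks are invented for illustration; a real system would obtain them from a validated severity-of-illness model such as PRISM.

```python
# Minimal standardized mortality ratio (SMR) sketch: expected deaths are the sum of
# model-predicted mortality risks, and SMR = observed deaths / expected deaths.
# Predicted risks below are invented, not actual PRISM outputs.
patients = [
    # (predicted mortality risk from the severity model, died during PICU stay)
    (0.02, False), (0.05, False), (0.40, True), (0.10, False),
    (0.75, True),  (0.01, False), (0.30, False), (0.20, True),
]

observed = sum(1 for _, died in patients if died)
expected = sum(risk for risk, _ in patients)
smr = observed / expected

print(f"observed deaths: {observed}, expected deaths: {expected:.2f}, SMR: {smr:.2f}")
# SMR < 1 suggests fewer deaths than the case mix predicts; SMR > 1 suggests more.
```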
Risk adjustment systems have been effectively incorporated into specialty databases. For example, the Virtual Pediatric Intensive Care Unit System (VPS) is the major pediatric critical care database system in the United States. Comprising >100 PICUs and pediatric cardiac ICUs across the United States, as well as international PICUs, the VPS currently has >300,000 patients in its database. The VPS database emphasizes data validity and reliability to ensure that the resulting data are accurate. Data validity has been established using standard data definitions with significant clinical input. Data reliability is established through interrater reliability testing to ensure that manual data collection, which involves several data collectors within pediatric institutions, is consistent. The PRISM scoring system is programmed into the VPS software to allow rapid estimation of the severity of illness of individual patients. This in turn allows risk adjustment of various outcomes, which are compared within institutions over time and across institutions for purposes of QI.
Regarding quality of healthcare for children, healthcare reform had 3 key implications. First, expanded insurance coverage improved access and included expanded coverage for young adults to age 26 yr. Second, various initiatives related to quality, safety, patient-centered outcomes research, and innovation were implemented and funded. For example, the Agency for Healthcare Research and Quality (AHRQ) funded a national effort to establish 7 centers of excellence through the Pediatric Quality Measurement Program (PQMP) to improve existing pediatric quality measures and create new measures that can be used by states and in a variety of other settings to evaluate the quality of care for children. Third, reform advocated a paradigm shift in the existing model of healthcare delivery from vertical integration toward a model of horizontal integration. This has led to the creation and rapid growth of integrated delivery systems and the risk-sharing relationships of accountable care organizations (ACOs). Population health outcomes from these changes remain uncertain, although healthcare cost inflation appears to have slowed somewhat.
Another area of increasing emphasis is population health, which expands the traditional role of physicians from improving quality of care for individual patients to also improving quality of care for larger populations. Populations can be defined by geography or by disease/patient condition. Payment and reimbursement for care delivered by physicians and health systems are increasingly being tied to measurable improvements in population health. To achieve meaningful improvement in population outcomes, physician practices will need to embrace the emerging paradigm of practice transformation, whose many facets include the adoption of a medical home, seamless connectivity across the primary care and subspecialty continuum, and a strong connection between the medical and social determinants of healthcare delivery. To implement successful practice transformation, hospitals are increasingly adopting a broader view and evolving into healthcare systems that serve children across the entire care continuum, including preventive and primary care, acute hospital care, and partnerships with community organizations to enhance the social support structure. In addition, new risk-sharing payment models are evolving, resulting in the growth of entities such as ACOs, which represent a financial risk-sharing model across primary care, subspecialty care, and hospitals.
Health information technology (HIT) is a critical component in the effort to improve quality. HIT includes electronic health records, personal health records, and health information exchange. A well-functioning electronic health record collects and stores patient data electronically, provides this information efficiently to clinicians and other healthcare providers, allows clinicians to enter patient care orders through computerized physician order entry, and provides clinical decision support to improve physician decision-making at the individual patient level. The personal health record allows patients and families to be more actively engaged in managing their own health by monitoring their clinical progress and laboratory information, as well as to communicate with their physicians to make appointments, obtain medications, and have questions answered. Appropriate, timely, and seamless sharing of patient information across physician networks and healthcare organizations is critical to quality care and to achieving the full vision of a medical home for children. Health information exchange allows the sharing of healthcare information in an electronic format to facilitate appropriate connections between providers and healthcare organizations within a community or region.
Despite the success of individual QI projects, the overall progress to achieve large-scale improvements to reach all children across the spectrum of geographic location and SES remains limited. This contributes to the health disparities that persist for children, with significant differences in access and quality of care. A potential factor that limits the full impact of QI is the lack of strategic alignment of improvement efforts with hospitals, health systems, and across states.
This challenge can be viewed from a systems standpoint as the ability to conduct and expand QI from the micro level (individual projects), to the meso level (regional), to the macro level (national and international). The learning from individual QI projects addressing specific challenges can be expanded to the regional level by ensuring that there is optimal leadership, opportunity for education, and adoption of improvement science (Fig. 4.10). To further expand the learning to a national and international level, it is important to leverage implementation science to allow a strategic approach to identifying the key factors that influence success. To fully leverage these synergies and improve the quality of care delivered to children, national and international healthcare organizations must collaborate effectively from a knowledge management and improvement standpoint (Table 4.3).
Table 4.3