
Watson Health

15 Top Health Systems Study, 2018
10th edition | April 23, 2018
IBM Watson Health™
75 Binney Street
Cambridge, MA 02142
800-525-9083
ibm.com/watsonhealth

Watson Health 15 Top Health Systems Study, 2018; 10th edition

© Copyright IBM Corporation 2018. All rights reserved. IBM, the IBM logo, ibm.com,
Watson Health, and 100 Top Hospitals are trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service
names might be trademarks of IBM or other companies.

Printed and bound in the United States of America.

The information contained in this publication is intended to serve as a guide for general
comparisons and evaluations, but not as the sole basis upon which any specific
conduct is to be recommended or undertaken.

The reader bears sole risk and responsibility for any analysis, interpretation, or
conclusion based on the information contained in this publication, and IBM shall not
be responsible for any errors, misstatements, inaccuracies, or omissions contained
herein. No part of this publication may be reproduced or transmitted in any form or
by any means, electronic or mechanical, including photocopying, recording, or by
any information storage and retrieval system, without permission in writing from
IBM Watson Health.

ISBN: 978-1-57372-475-3
Contents

03 Introduction
09 2018 15 Top Health Systems award winners
11 Findings
21 Methodology
37 Appendix A: Health system winners and their member hospitals
41 Appendix B: The top quintile: Highest-performing health systems
43 Appendix C: Methodology details
57 Appendix D: All health systems in study

Introduction

Welcome to the 10th edition of the Watson Health 15 Top Health Systems study.

This 2018 study from IBM Watson Health™ marks another milestone in the 100 Top Hospitals® program’s rich history: a full decade of publishing an annual quantitative study designed to shine a light on the nation’s highest-performing health systems.

Our research of US health system performance began with the same goal that has driven each study since the beginning of the 100 Top Hospitals program: to identify top performers, and also deliver insights that may help healthcare systems better focus their improvement initiatives on achieving consistent, balanced, and sustainable high performance.

Truven Health Analytics® was acquired by IBM in 2016 to help form a new business, Watson Health.

Health systems do not apply for our 15 Top Health Systems selection process, and winners do not pay to market their honor.

Illuminating achievement for a value-based world
Our research is based on clinical, operational,
financial, and patient perception-of-care measures
that form a balanced scorecard. For 10 years,
the health systems achieving excellence on our
scorecard inherently set attainable benchmarks for
others in the industry to aspire to over time.

Providing these measures of success may be especially important today as we see the healthcare landscape continuing to evolve from fee-for-service toward value-based care models, with many health systems paying closer attention to population health management and system-wide alignment of performance.

The Watson Health 15 Top Health Systems scorecard results are divided into two separate sections that graphically illustrate:
––A health system’s performance and improvement versus peer health systems
––Cross-system performance alignment of member hospitals

By finding ways to take balanced performance to the next level, the winners of our 15 Top Health Systems award are identifying opportunities to deliver healthcare value to patients, communities, and payers. The performance levels achieved by these systems may motivate their peers to use data, analytics, and benchmarks to close their performance gaps.

Delivering a robust and transparent assessment

We have designed this study to provide a view of health system performance across multiple dimensions: how they stand compared to peers and high performers (whole-system performance), where they stand in the evolution of their own cultures of performance improvement (relative long-term improvement and rate of improvement), and the achievement of cross-system performance alignment (member hospital performance).

These collective insights may be used by health system leaders to adjust continuous improvement targets, enrich the collaboration of member hospitals, and track system-wide alignment toward common performance goals.

To maintain the 15 Top Health Systems study’s integrity and avoid bias, we use public data sources and explain the methodologies we use to calculate outcome metrics. This approach supports inclusion of systems across the country and facilitates consistency of definitions and data.

Our national balanced scorecard, based on Kaplan and Norton’s concept1, is the foundation of our research. It comprises key measures of performance: inpatient and extended care quality, operational efficiency, financial health*, and patient experience. The composite score derived from these measures reflects top performance in hospital-based care, management, and leadership.

* In this study, measures of financial health are reported for information-only purposes because public, audited financial statements are not available for all US health systems.

In addition, to support consideration of the differences in scale among systems, the study categorizes the nation’s systems into three groups: large, medium, and small health systems. This produces benchmarks that are comparable and action-driving across similar systems.

Yielding a measure of leadership excellence

Since 1993, the 100 Top Hospitals program has sought to shed light on the efficacy of innovative leaders. The methodology is aimed at identifying leaders who can transform an organization by pinpointing improvement opportunities and adjusting goals for key performance domains. We believe that higher composite scores on the balanced scorecard typically indicate more effective leadership and a consistent delivery of value.

The leadership of today’s health systems, including the board, executive team, and medical staff leadership, is responsible for ensuring all of their system hospitals are performing at similarly high levels in both the short and long term. The 15 Top Health Systems study and analytics provide a view of that enterprise performance alignment.

Comparing the performance of our 2018 winners to nonwinners

Using the measures in our national balanced scorecard, this year’s 15 Top Health Systems study revealed important differences between award winners and their nonwinning peers.

Our study’s highest-performing systems:
–– Had lower inpatient mortality and fewer patient complications, considering patient severity
–– Delivered care that resulted in fewer HAIs
–– Had lower 30-day readmission rates*
–– Sent patients home sooner
–– Provided faster emergency care
–– Kept episode-of-care expenses low, both in-hospital and through the aftercare process
–– Scored higher on patient ratings of their overall hospital experience

Our study projections also indicate that if the benchmarks of performance established by our 2018 winners were achieved by all US health systems we studied, the following would be true:
–– Over 60,000 additional lives could be saved in-hospital
–– Over 31,000 additional patients could be complication-free
–– 16% fewer infections would be acquired by hospital patients
––The typical patient could be released from the hospital almost a half day sooner and would have 5.6% fewer expenses related to the complete episode of care than the median patient in the US
–– Patients could spend 40 minutes less time in hospital emergency rooms per visit

* 30-day mortality was slightly higher for winning health systems in this year’s study.

Understanding the similarities and differences between high and low performers can help provide benchmarks for the industry. The findings we assemble for this study provide examples of excellence, as evidenced in several additional published studies2-24.

The analysis on the previous page is based on applying the difference between study winners and nonwinners to Medicare patient counts. If the same standards were applied to all inpatients, the impact would be even greater.

For more details about this study’s findings and the achievements of the 15 Top Health Systems, please see the Findings section of this document.

Welcoming your input


The 100 Top Hospitals program works to ensure
that the measures and methodologies used in our
studies are fair, consistent, and meaningful. We
continually test the validity of our performance
measures and data sources. In addition, as part of
our internal performance improvement process, we
welcome comments about our study from health
system, hospital, and physician executives. To
submit comments, visit 100tophospitals.com.

Showcasing the versatility of the 100 Top Hospitals program
The 15 Top Health Systems research is one of three
major annual studies of the Watson Health 100 Top
Hospitals program. To increase understanding of
trends in specific areas of the healthcare industry,
the program includes:
–– 100 Top Hospitals and Everest Award studies
Research that annually recognizes the
100 top-rated hospitals in the nation based
on a proprietary, balanced scorecard of
overall organizational performance, and
identifies those hospitals that also excel at
long-term rates of improvement in addition
to performance
–– 50 Top Cardiovascular Hospitals study
An annual study introduced in 1999 that
identifies hospitals demonstrating the highest
performance in hospital cardiovascular
services for four important patient groups:
heart attack, heart failure, coronary artery
bypass graft, and percutaneous coronary
intervention

–– 15 Top Health Systems study
An annual study introduced in 2009 that provides an objective measure of health system performance overall and offers insight into the ability of a system’s member hospitals to deliver consistent top performance across the communities they serve, all based on our national health system scorecard

The 2018 study analyzed 338 health systems and 2,422 hospitals that are members of health systems.

In addition to the major studies, customized analyses are also available from the 100 Top Hospitals program, including custom benchmark reports. Our reports are designed to help healthcare executives understand how their organizational performance compares to peers within health systems, states, and markets.

100 Top Hospitals program reports offer a two-dimensional view of performance: improvement over time, applying the most current methodologies across all years of data to identify trends, as well as the most current year’s performance.

You can read more about these studies, order customized reports, and view lists of all winners by visiting 100tophospitals.com.

About IBM Watson Health


Each day, professionals throughout the health
ecosystem make powerful progress toward a
healthier future. At IBM Watson Health, we help
them remove obstacles, optimize efforts, and
reveal new insights to support the people they
serve. Working across the landscape, from payers
and providers to governments and life sciences,
we bring together deep health expertise; proven
innovation; and the power of artificial intelligence
to enable our customers to uncover, connect, and
act as they work to solve health challenges for
people everywhere.

For more information, visit ibm.com/watsonhealth.

2018 15 Top Health Systems award winners

The Watson Health 100 Top Hospitals® program is pleased to present the 2018 Watson Health 15 Top Health Systems.

Note that the order of health systems in the following tables does not reflect performance rating. Systems are ordered alphabetically. For full details on these peer groups and the process we used to select the winning benchmark health systems*, see the Methodology section of this document.

15 Top Health Systems award winners


Large health systems (> $1.85 billion) Location
Mayo Foundation Rochester, MN
Mercy Chesterfield, MO
Sentara Healthcare Norfolk, VA
St. Luke’s Health System Boise, ID
UCHealth Aurora, CO

Medium health systems ($800 million - $1.85 billion) Location
Aspirus Wausau, WI
HealthPartners Bloomington, MN
Mercy Health - Cincinnati Cincinnati, OH
Mission Health Asheville, NC
TriHealth Cincinnati, OH

Small health systems (< $800 million) Location
Asante Medford, OR
CHI St. Joseph Health Bryan, TX
Maury Regional Health Columbia, TN
Roper St. Francis Healthcare Charleston, SC
UPMC Susquehanna Health System Williamsport, PA

* To see a full list of Winners Through the Years, visit truvenhealth.com/Products/100-Top-Hospitals/Program-Info/15-Top-Health-Systems/Winners-Through-the-Years.

Findings

The Watson Health 15 Top Health Systems study profiles the top-performing health systems* in the country. According to publicly available data and our transparent methodologies, these industry leaders appear to be successfully addressing the challenge of deploying innovative clinical and operational approaches to multiple hospital sites to achieve consistent top performance.

For 10 years, the 15 Top Health Systems study has followed the results achieved by leading health systems and published numerous examples of the benchmark systems’ clinical and operational excellence. The study is more than a list of accomplishments; it is a tool US health system leaders can use to help guide their own performance improvement initiatives. By highlighting what the highest-performing leaders around the country are doing well, we create aspirational benchmarks for the rest of the industry.

How the winning systems compared to their peers

In this section, we show how the 15 Top Health Systems performed within their comparison groups (large, medium, and small systems), compared to nonwinning peers. In addition, we identify some key findings among comparison groups. For performance measure details and definitions of each comparison group, see the Methodology section of this document.

Note: In Tables 1 through 4, data for the 15 Top Health Systems award winners is labeled “Benchmark,” and data for all health systems, excluding award winners, is labeled “Peer group.” In columns labeled “Benchmark compared with peer group,” we calculated the actual and percentage difference between the benchmark hospital scores and the peer group scores.

15 Top Health Systems had better survival rates**
––The winners had 14.6% fewer in-hospital deaths than their nonwinning peers, considering patient severity (Table 1)
–– Mortality results for medium health systems showed the greatest difference between winners and nonwinners, with 15.9% fewer deaths among benchmark health systems (Tables 2 - 4)

15 Top Health Systems had fewer patient complications**
–– Patients treated at the winning systems’ member hospitals had significantly fewer complications, with rates 17.3% lower than at nonwinning system hospitals, considering patient severity (Table 1)
–– Large health systems had the greatest difference between winners and nonwinners, with 20.9% fewer complications (Tables 2 - 4)

* To be defined as a health system in this study, an organization must have at least two short-term, general, acute care hospitals with separate Medicare provider identification
numbers. Systems with multiple hospital facilities reporting under one provider ID are profiled as a single hospital in the Watson Health 100 Top Hospitals® study.
** Mortality and complications index values cannot be compared among the three different comparison groups because they are normalized by comparison group.
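The “Benchmark compared with peer group” columns in Tables 1 through 4 come down to a simple difference and percent-difference calculation. The sketch below (the function name and structure are ours, not the study’s) shows the arithmetic, including the study’s convention of not reporting a percent difference for measures that are already percents:

```python
def compare(benchmark, peer, is_percent_measure=False):
    """Difference and percent difference between a winning-benchmark
    median and a nonwinning peer-group median (illustrative sketch)."""
    diff = benchmark - peer
    # Measures already expressed as percents (the 30-day rates) get no
    # percent difference, per footnote 4 of Tables 1 through 4.
    pct = None if is_percent_measure else 100.0 * diff / peer
    return diff, pct

# Table 1 inpatient mortality index: 0.88 (benchmark) vs. 1.03 (peer)
diff, pct = compare(0.88, 1.03)
assert round(diff, 2) == -0.15 and round(pct, 1) == -14.6
```

Note that because published medians are rounded, recomputing a percent difference from the rounded table values may not reproduce the reported figure exactly (as the tables’ own rounding note cautions).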

15 Top Health Systems had fewer healthcare-associated infections*
A new ranked measure in the 2018 study, healthcare-associated infections (HAIs)**, captures information about the quality of inpatient care. Based on nationwide data availability, we built a composite measure of HAI performance at the system level, considering up to six HAIs, depending on assigned comparison group. The six reported HAIs are: methicillin-resistant Staphylococcus aureus (MRSA) bloodstream infections, central line-associated bloodstream infections, catheter-associated urinary tract infections, Clostridium difficile (C. diff), surgical site infections (SSIs) following colon surgery, and SSIs following an abdominal hysterectomy.

–– Among all types of systems, winners overall had a more favorable composite median HAI index value than nonwinner peers, at 0.77 versus 0.92, respectively; this reflects 16.2% fewer infections occurring at the 15 Top Health Systems compared to other peer systems (Table 1)
–– Small health system winners and nonwinners showed the most dramatic difference on HAI performance: winners had a median HAI composite index value of 0.64, which was 26.5% lower than the median HAI index score at nonwinning systems (0.87) (Tables 2 - 4)

15 Top Health Systems had mixed results on longer-term outcomes
Several patient groups are included in the 30-day mortality and readmission extended care composite metrics. The mean 30-day mortality rate includes heart attack (AMI), heart failure (HF), pneumonia, chronic obstructive pulmonary disease (COPD), and stroke patient groups. The mean 30-day readmission rate includes AMI, HF, pneumonia, total hip arthroplasty and/or total knee arthroplasty (THA/TKA), COPD, and stroke patient groups.

30-day mortality results
–– In this year’s study, the winning systems had a higher mean 30-day mortality rate than the nonwinning systems, due to slightly higher rates among large and small systems (Tables 1 - 4)
–– Small health systems displayed the largest gap between winners and nonwinning peers on 30-day mortality (13.4% versus 12.9%), while medium health system winners had performance the same as nonwinning peers, at 12.7%, which was also the lowest median 30-day mortality value observed in this year’s study (Tables 1, 3, and 4)

30-day readmission results
–– Winning health systems had lower 30-day readmission rates than their nonwinning peers nationally (0.6 percentage points lower) and for all comparison groups (Table 1)
–– Small winning systems had the best mean 30-day readmission rate (14.4%) among all comparison groups and outperformed their nonwinning peers by the greatest margin, 0.9 percentage points (Tables 2 - 4)

* HAI index values cannot be compared among the three different comparison groups because they are normalized by comparison group.
** As developed by a unit of the Centers for Disease Control and Prevention, the National Healthcare Safety Network, and reported by the Centers for Medicare & Medicaid Services
(CMS) in the public Hospital Compare data set.
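The 30-day composites described above are means over condition-level CMS rates. A minimal sketch of that aggregation (the function and the sample rates below are ours for illustration; the study’s exact calculation is described in its Methodology section):

```python
def composite_rate(rates_by_group):
    """Unweighted mean 30-day rate (%) across CMS patient groups.
    Illustrative sketch only; the study may weight groups differently."""
    return sum(rates_by_group.values()) / len(rates_by_group)

# Hypothetical condition-level 30-day mortality rates, in percent
mortality = {"AMI": 13.1, "HF": 11.8, "pneumonia": 15.9,
             "COPD": 8.4, "stroke": 14.3}
assert round(composite_rate(mortality), 1) == 12.7
```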



Patients treated in 15 Top Health Systems hospitals returned home sooner*
–– Winning systems had a median average length of stay (ALOS) of 4.4 days, nearly a half day shorter than the nonwinner median of 4.9 days (Table 1)
––The ALOS difference between winners and nonwinners was consistent across all comparison groups, with benchmark systems discharging patients 0.4 days sooner (Tables 2 - 4)

Patients spent less time in 15 Top Health Systems emergency departments
The mean emergency department (ED) throughput composite metric measures the amount of time spent in the ED. It includes median time from ED arrival to ED departure for admitted ED patients, and median time from ED arrival to ED departure for non-admitted ED patients.

–– Overall, winning systems had, on average, 40 minutes shorter ED wait times per patient than nonwinners (Table 1)
––The greatest difference between winning systems and their peers was in the medium health systems comparison group, with medium system winners averaging 40.2 minutes less time spent in the ED per patient visit than nonwinners; across comparison groups, the time saved ranged from 34.8 to 40.2 minutes (Tables 2 - 4)

15 Top Health Systems hospitals had lower Medicare spending per beneficiary episode costs
–– Overall, winning systems had a 5.6% lower Medicare spending per beneficiary (MSPB) index than nonwinners (Table 1)
–– Medium health systems showed the greatest difference between winners and nonwinners, with an 11.8% lower MSPB index (Table 2)
–– Medium winning systems also had the lowest average MSPB index (0.88) among the comparison groups (Table 2)

Patients rated 15 Top Health Systems hospitals higher than peer system hospitals
–– Winning systems had a 2.3% higher overall score on the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS), which tells us that patients treated by members of the top health systems reported a better overall hospital experience than those treated in nonwinning peer hospitals (Table 1)
–– Small health system winners had the best HCAHPS score (270.3) among the comparison groups (Tables 2 - 4)
––The small winning systems also had the biggest lead over nonwinning peers, with an HCAHPS score that was 3.1% higher (Tables 2 - 4)

* ALOS cannot be compared among the three different comparison groups because values are normalized by comparison group.
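The ED throughput composite described above averages the two CMS median times. A minimal sketch (the function name and input values are ours for illustration, not taken from the study):

```python
def ed_throughput(admitted_minutes, non_admitted_minutes):
    """Mean of the two CMS ED medians: arrival-to-departure time for
    admitted patients and for non-admitted patients, in minutes.
    Illustrative sketch only."""
    return (admitted_minutes + non_admitted_minutes) / 2

# Hypothetical inputs chosen to land on a composite of 180.3 minutes
assert round(ed_throughput(260.0, 100.6), 1) == 180.3
```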

Table 1. National health system performance comparisons (all systems)

Performance measure                                Winning     Nonwinning   Difference   Percent      Comments
                                                   benchmark   peer group                difference
                                                   median      median
Inpatient mortality index1                         0.88        1.03         -0.15        -14.6%       Lower mortality
Complications index1                               0.85        1.02         -0.18        -17.3%       Fewer complications
Healthcare-associated infection (HAI) index2       0.77        0.92         -0.15        -16.2%       Fewer infections
30-day mortality rate (%)3                         12.9        12.7         0.1          n/a4         Higher 30-day mortality
30-day readmission rate (%)3                       14.7        15.3         -0.6         n/a4         Fewer 30-day readmissions
Average length of stay (ALOS) (days)1              4.4         4.9          -0.4         -8.8%        Shorter stays
Emergency department (ED) throughput measure2      180.3       220.3        -40.0        -18.2%       Less time to service
Medicare spending per beneficiary (MSPB) index2    0.94        0.99         -0.06        -5.6%        Lower episode cost
Hospital Consumer Assessment of Healthcare
Providers and Systems (HCAHPS) score2              270.1       264.0        6.1          2.3%         Better patient experience

1. Mortality, complications, and average length of stay based on Present on Admission (POA)-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
2. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
3. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
4. We do not calculate percent difference for this measure because it is already a percent value.
Note: Measure values are rounded for reporting, which may cause calculated differences to appear off.

Table 2. Large health system performance comparisons

Performance measure             Benchmark   Peer group   Difference   Percent      Comments
                                median      median                    difference
Inpatient mortality index1      0.89        1.00         -0.11        -11.0%       Lower mortality
Complications index1            0.82        1.04         -0.22        -20.9%       Fewer complications
HAI index2                      0.79        0.93         -0.14        -14.9%       Fewer infections
30-day mortality rate (%)3      12.8        12.7         0.1          n/a4         Higher 30-day mortality
30-day readmission rate (%)3    14.6        15.3         -0.7         n/a4         Fewer 30-day readmissions
ALOS (days)1                    4.4         4.9          -0.4         -8.9%        Shorter stays
ED throughput measure2          189.9       229.6        -39.7        -17.3%       Less time to service
MSPB index2                     0.94        0.99         -0.05        -5.5%        Lower episode cost
HCAHPS score2                   270.1       264.0        6.0          2.3%         Better patient experience

1. Mortality, complications, and ALOS based on POA-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
2. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
3. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
4. We do not calculate percent difference for this measure because it is already a percent value.
Note: Measure values are rounded for reporting, which may cause calculated differences to appear off.

14 IBM Watson Health


Table 3. Medium health system performance comparisons

Performance measure             Benchmark   Peer group   Difference   Percent      Comments
                                median      median                    difference
Inpatient mortality index1      0.85        1.01         -0.16        -15.9%       Lower mortality
Complications index1            0.85        1.03         -0.18        -17.0%       Fewer complications
HAI index2                      0.77        0.92         -0.15        -16.5%       Fewer infections
30-day mortality rate (%)3      12.7        12.7         0.0          n/a4         No difference in 30-day mortality
30-day readmission rate (%)3    14.7        15.2         -0.5         n/a4         Fewer 30-day readmissions
ALOS (days)1                    4.5         4.9          -0.4         -7.6%        Shorter stays
ED throughput measure2          181.0       221.2        -40.2        -18.2%       Less time to service
MSPB index2                     0.88        1.00         -0.12        -11.8%       Lower episode cost
HCAHPS score2                   269.2       265.0        4.2          1.6%         Better patient experience

1. Mortality, complications, and ALOS based on POA-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
2. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
3. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
4. We do not calculate percent difference for this measure because it is already a percent value.
Note: Measure values are rounded for reporting, which may cause calculated differences to appear off.

Table 4. Small health system performance comparisons

Performance measure             Benchmark   Peer group   Difference   Percent      Comments
                                median      median                    difference
Inpatient mortality index1      0.99        1.03         -0.04        -3.4%        Lower mortality
Complications index1            0.83        1.00         -0.17        -17.1%       Fewer complications
HAI index2                      0.64        0.87         -0.23        -26.5%       Fewer infections
30-day mortality rate (%)3      13.4        12.9         0.5          n/a4         Higher 30-day mortality
30-day readmission rate (%)3    14.4        15.3         -0.9         n/a4         Fewer 30-day readmissions
ALOS (days)1                    4.5         4.9          -0.4         -8.7%        Shorter stays
ED throughput measure2          177.3       212.2        -34.8        -16.4%       Less time to service
MSPB index2                     0.95        0.99         -0.04        -4.0%        Lower episode cost
HCAHPS score2                   270.3       262.2        8.1          3.1%         Better patient experience

1. Mortality, complications, and ALOS based on POA-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
2. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
3. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
4. We do not calculate percent difference for this measure because it is already a percent value.
Note: Measure values are rounded for reporting, which may cause calculated differences to appear off.

Winning health system results

In Table 5, we provide the 15 Top Health Systems values for each of the study’s performance measures. For a list of all hospitals included in each winning health system, see Appendix A.

Table 5. Winning health system performance measure results

Winning system name               Mortality  Complications  HAI     30-day      30-day        ALOS1   ED measure     MSPB    HCAHPS
                                  index1     index1         index2  mortality   readmission   (days)  mean           index2  score2
                                                                    rate3 (%)   rate3 (%)             minutes2

Large health systems
Mayo Foundation                   0.82       1.23           0.7     11.9        14.6          4.5     166.6          0.92    278.9
Mercy                             0.89       0.72           0.8     12.8        14.9          4.4     174.6          0.96    269.9
Sentara Healthcare                0.87       0.82           0.9     12.9        15.2          4.7     235.8          0.94    270.1
St. Luke's Health System          0.96       0.74           0.8     12.5        14.1          4.1     206.3          0.88    268.9
UCHealth                          0.91       0.88           0.9     12.9        14.1          4.1     189.9          0.98    271.5

Medium health systems
Aspirus                           0.98       0.71           0.6     13.3        14.8          3.9     154.6          0.69    269.2
HealthPartners                    0.87       1.08           0.8     12.7        14.7          4.3     174.8          0.88    268.7
Mercy Health - Cincinnati         0.68       0.85           0.5     12.0        15.4          4.6     210.0          1.00    271.6
Mission Health                    0.82       1.02           1.0     13.0        13.2          4.5     195.1          0.87    275.2
TriHealth                         0.85       0.83           0.8     12.3        14.7          4.5     181.0          0.99    267.3

Small health systems
Asante                            0.99       0.53           0.6     13.1        14.3          4.5     185.0          0.91    270.3
CHI St. Joseph Health             0.73       0.88           0.5     14.1        15.3          4.6     166.6          0.98    273.0
Maury Regional Health             1.05       0.51           0.5     13.6        15.5          3.9     177.3          0.95    269.0
Roper St. Francis Healthcare      1.06       0.83           1.1     12.0        13.9          4.8     159.2          0.99    276.9
UPMC Susquehanna Health System    0.74       1.09           0.7     13.4        14.4          4.3     180.3          0.79    267.0

1. Mortality, complications, and ALOS based on POA-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
2. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
3. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
Note: Mortality, complications, HAI, and ALOS measures cannot be compared across comparison groups because they are normalized by comparison group.
Note: Mortality, complications, HAI and ALOS measures cannot be compared across comparison groups because they are normalized by comparison group.
Top and bottom quintile results
We divided all the health systems in this study into performance quintiles, by comparison group, based on their performance on the study’s measures. In Table 6, we have highlighted differences between the highest- and lowest-performing quintiles by providing their median scores on the study performance measures. (See Appendix B for a list of the health systems included in the top-performance quintile and Appendix D for all systems included in the study.)

The top quintile systems outperformed their lowest quintile peers in the following ways:
––They had much better patient outcomes: 19.7% lower mortality and 5.7% lower complications
––They had fewer occurrences of HAIs in their facilities: a 25.8% smaller median HAI index value (0.8) in the top performance quintile
––They had somewhat lower mean 30-day mortality rates (0.4 percentage points lower; includes AMI, HF, pneumonia, COPD, and stroke patients)
––They had lower mean 30-day readmission rates (14.8% versus 15.5%; includes AMI, HF, pneumonia, THA/TKA, COPD, and stroke patients)
––They had dramatically lower mean ED wait times, with an average difference of 62 minutes per ED patient (24.1% less than the bottom quintile)
––They were more efficient, releasing patients almost one day (0.9) sooner than the lowest performers and at a 5.3% lower MSPB index
––They scored 11.3 points higher on the HCAHPS overall patient rating of care
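The quintile split described above can be sketched as a rank-based assignment within each comparison group (our own minimal illustration; the study’s actual composite scoring and ranking are described in its Methodology section):

```python
import math

def quintiles(scores):
    """Assign a performance quintile (1 = top, 5 = bottom) to each
    system in one comparison group, where a higher composite score
    is better. Illustrative sketch only."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    q = [0] * len(scores)
    for rank, i in enumerate(order):
        q[i] = math.floor(5 * rank / len(scores)) + 1
    return q

# Ten systems with scores already in descending order: two per quintile
assert quintiles([10, 9, 8, 7, 6, 5, 4, 3, 2, 1]) == [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
```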

Table 6. Comparison of health systems in the top and bottom quintiles of performance1

Performance measure             Top quintile   Bottom quintile   Difference   Percent      Top versus bottom quintile
                                median         median                         difference
Inpatient mortality index2      0.89           1.11              -0.22        -19.7%       Lower mortality
Complications index2            0.99           1.05              -0.06        -5.7%        Fewer complications
HAI index3                      0.8            1.0               -0.3         -25.8%       Fewer infections
30-day mortality rate (%)4      12.6           12.9              -0.4         n/a5         Lower 30-day mortality
30-day readmission rate (%)4    14.8           15.5              -0.8         n/a5         Fewer 30-day readmissions
ALOS2                           4.5            5.4               -0.9         -16.1%       Shorter stays
ED measure mean minutes3        195.0          256.9             -62.0        -24.1%       Less time to service
MSPB index3                     0.96           1.01              -0.05        -5.3%        Lower episode cost
HCAHPS score3                   269.0          257.7             11.3         4.4%         Better patient experience

1. Top and bottom performance quintiles were determined by comparison group and aggregated to calculate medians.
2. Mortality, complications, and ALOS based on POA-enabled risk models applied to MEDPAR 2015 and 2016 data (ALOS 2016 only).
3. HAI, ED measure, MSPB, and HCAHPS data from CMS Hospital Compare Jan. 1, 2016 - Dec. 31, 2016 data set.
4. 30-day rates from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
5. We do not calculate percent difference for this measure because it is already a percent value.
Note: Measure values are rounded for reporting, which may cause calculated differences to appear off.

Test metrics: Reported for information only
Every year, we evaluate the 15 Top Health Systems study and explore whether new measures would enhance the value of the analysis we provide. For this 2018 study, we are testing several new performance measures that update basic standards of inpatient care and expand the balanced scorecard across the continuum of care. These metrics were not used in the ranking and selection of winning health systems.

If you would like to provide feedback on the following proposed measures, email 100tophospitals@us.ibm.com.

Excess days in acute care measures
The newest set of measures available from CMS in the Hospital Compare data set are the excess days in acute care (EDAC) measures for AMI and HF. CMS defines "excess days" as the difference between a hospital's average days in acute care and expected days, based on an average hospital nationally. Days in acute care include days spent in an ED, a hospital observation unit, or a hospital inpatient unit for 30 days following a hospitalization. The data period in our study for these measures is the same as for the other 30-day metrics for specific patient conditions: three years, combined (July 1, 2013 - June 30, 2016).

30-day all-cause, hospital-wide readmission measure
We are continuing to publish the hospital-wide, 30-day readmission measure, which CMS is publicly reporting in the Hospital Compare data set, to provide an overall readmission comparison, for information only. However, we rank on a composite score based on the publicly available individual patient groups. The data period for the hospital-wide readmission measure is July 1, 2015 - June 30, 2016.

30-day episode-of-care payment measures
We are continuing to publish risk-standardized payments associated with 30-day episode-of-care measures for three patient groups that are now being published by CMS in the Hospital Compare data set. These measures capture differences in services and supplies provided to patients who have been diagnosed with AMI, HF, or pneumonia. According to the CMS definition of these measures, they are the sum of payments made for care and supplies starting the day the patient enters the hospital and for the next 30 days. In our study, the data period for these measures is the same as for the other 30-day metrics for specific patient conditions: three years, combined (July 1, 2013 - June 30, 2016).

90-day episode-of-care payment measure
Another measure recently made available in the Hospital Compare data set is the 90-day episode-of-care payment metric for primary, elective THA/TKA. Like the other 30-day episode-of-care payment measures, CMS calculates risk-standardized payments associated with a 90-day episode of care, compared to an average hospital nationally. The measure summarizes payments for patients across multiple care settings, services, and supplies during the 90-day period, which starts on the day of admission. The data period for this measure combines three years, April 1, 2013 - March 31, 2016.

90-day complication measure
Along with the THA/TKA 90-day payment measure recently made available in Hospital Compare data, CMS is also publishing a THA/TKA 90-day complication measure. This measure calculates a risk-standardized complication rate for THA/TKA procedures using the occurrence of one or more of the specified complications within the specified timeframes. The data period for this measure combines three years, April 1, 2013 - March 31, 2016 (complications are listed on the following page).

18 IBM Watson Health


–– AMI, pneumonia, or sepsis/septicemia/shock during or within seven days of index admission
–– Surgical site bleeding, pulmonary embolism, or death during or within 30 days of index admission
–– Mechanical complication or periprosthetic joint infection/wound infection during or within 90 days of index admission

See the CMS website for measure methodology.²⁵

Table 7 shows the national performance of benchmark and peer health systems on the test metrics.

This year, the 15 Top Health Systems winners outperformed nonwinning peers on all test measures, which is an interesting finding given that these are independent variables not used in the selection of the winners.
–– On all reported CMS episode measures of cost, winning systems consistently outperformed their nonwinner peers; the difference was greatest for THA/TKA 90-day episode payment (5.8%)
–– Winning health systems performed better on the AMI and HF 30-day EDAC measures, showing that patients spent fewer days than expected in the ED, in observation, or back in the hospital after an initial index acute care stay (5.2 days under the expected amount for AMI EDAC and 8.5 under expected for HF EDAC), whereas nonwinning peers averaged 7.6 and 8.1 days more than expected (EDAC values are reported as excess days per 100 discharges)
–– On the 90-day THA/TKA complications rate, another CMS measure of outcomes extended outside the hospital stay, winning systems had a complication rate 0.3 percentage points lower than that of peers
–– Benchmark systems had stronger performance on 30-day, hospital-wide readmissions (14.8% versus 15.4% for peers)

Table 7. Information-only measures – Health system performance comparisons (all classes)

Performance measure | Benchmark health systems median | Peer group of US health systems median | Difference | Percent difference | Comments
30-day, hospital-wide readmission rate¹ | 14.8 | 15.4 | -0.63 | n/a³ | Fewer 30-day readmissions
AMI 30-day episode payment¹ | $22,798 | $23,204 | -$406 | -1.8% | Lower episode cost
HF 30-day episode payment¹ | $16,023 | $16,376 | -$353 | -2.2% | Lower episode cost
Pneumonia 30-day episode payment¹ | $17,117 | $17,318 | -$201 | -1.2% | Lower episode cost
AMI 30-day excess days in acute care¹ | -5.2 | 7.6 | -12.81 | n/a³ | Fewer excess days
HF 30-day excess days in acute care¹ | -8.5 | 8.1 | -16.62 | n/a³ | Fewer excess days
THA/TKA* 90-day episode payment² | $20,653 | $21,931 | -$1,278 | -5.8% | Lower episode cost
THA/TKA* 90-day complications rate² | 2.4 | 2.7 | -0.30 | n/a³ | Fewer complications

1. 30-day measures from CMS Hospital Compare July 1, 2013 - June 30, 2016 data set.
2. 90-day measures from CMS Hospital Compare April 1, 2013 - March 31, 2016 data set.
3. We do not calculate percent difference for these measures because it can be a negative number or is already a percent value.
* Primary, elective total hip arthroplasty and total knee arthroplasty.

Financial metrics
We continue to publish the financial measures each year for information only, as audited financial statements are not available for all systems included in the study*. These measures are not included in the ranking and selection of benchmark health systems.

Results for included systems are found in Table 8 below.
–– Overall, benchmark health system performance is better than that of nonwinning peers, both on operating margin (1.2 percentage points higher among winners) and long-term debt-to-capitalization ratio (LTD/cap) (0.1 lower ratio among winners)
–– Notably, medium health system winners had a much higher operating profit margin than nonwinners (8.7% versus 2.4%) and showed a greater difference on the LTD/cap (0.2 versus 0.4)

Table 8. Information-only – Financial performance

Performance measure | Health system comparison group | Benchmark health systems median | Peer group of US health systems median | Difference (benchmark compared with peer group)
Operating margin (percentage) | All systems | 4.2 | 3.0 | 1.2
Operating margin (percentage) | Large | 4.3 | 3.6 | 0.7
Operating margin (percentage) | Medium | 8.7 | 2.4 | 6.3
Operating margin (percentage) | Small | 4.0 | 2.2 | 1.8
Long-term debt-to-capitalization ratio (LTD/cap) | All systems | 0.3 | 0.3 | -0.1
Long-term debt-to-capitalization ratio (LTD/cap) | Large | 0.3 | 0.3 | 0.0
Long-term debt-to-capitalization ratio (LTD/cap) | Medium | 0.2 | 0.4 | -0.1
Long-term debt-to-capitalization ratio (LTD/cap) | Small | 0.3 | 0.3 | -0.1

Note: Data sourced from audited 2016 financial reports via dacbond.com, emma.msrb.org, yahoo.brand.edgar-online.com, and sec.gov.

* A total of 84.2% of parent and independent systems published audited financial statements for 2016. Subsystems that are members of a larger “parent” health system generally
do not have separate audited financial statements. This translated into 65.1% of all in-study health systems with available financial reports.
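The two information-only financial measures follow standard definitions. A sketch assuming the usual statement line items; the study's exact mapping from the audited statements is not spelled out here, so treat the formulas as conventional approximations:

```python
def operating_margin_pct(operating_revenue, operating_expense):
    """Operating margin as a percentage of operating revenue
    (standard definition; assumed, not quoted from the study)."""
    return 100 * (operating_revenue - operating_expense) / operating_revenue

def ltd_to_cap(long_term_debt, net_assets):
    """Long-term debt-to-capitalization: LTD / (LTD + net assets)
    (standard definition; assumed, not quoted from the study)."""
    return long_term_debt / (long_term_debt + net_assets)
```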



Methodology

Watson Health 15 Top Health Systems is a quantitative study that annually identifies 15 US health systems with the highest overall achievement on a balanced scorecard.

The health system scorecard is based on the 100 Top Hospitals® national balanced scorecard methodologies and focuses on four performance domains: inpatient outcomes, extended outcomes, operational efficiency, and patient experience.

This 2018 health systems study includes nine measures that provide an objective comparison of health system performance using publicly available data. The health systems with the highest achievement are those with the highest ranking on a composite score based on these nine measures.

To analyze health system performance, we include data for short-term, acute care, nonfederal US hospitals, as well as cardiac, orthopedic, women's, and critical access hospitals (CAHs) that are members of the health systems.

The main steps we take in selecting the top 15 health systems are:
–– Building the database of health systems, including special selection and exclusion criteria
–– Identifying which hospitals are members of health systems
–– Aggregating the patient-level and hospital-level data from member hospitals and calculating a set of performance measures at the system level
–– Classifying health systems into comparison groups based on total operating expense
–– Ranking systems on each of the performance measures by comparison group
–– Determining the 15 top performers (five in each comparison group) from the health systems' overall rankings, based on their aggregate performance (sum of individual weighted measure ranks)

The following section is intended to be an overview of these steps. To request more detailed information on any of the study methodologies outlined here, email us at 100tophospitals@us.ibm.com or call 800-525-9083.

Building the database of hospitals
Like all the 100 Top Hospitals studies, the 15 Top Health Systems study uses only publicly available data. The data for this study primarily came from:
–– Medicare Provider Analysis and Review (MEDPAR) data set*
–– Centers for Medicare & Medicaid Services (CMS) Hospital Compare data set

We use MEDPAR patient-level demographic, diagnosis, and procedure information to calculate mortality, complications, and length of stay (LOS) by aggregating member hospital data to the health system level. The MEDPAR data set contains information on the approximately 15 million Medicare patients discharged annually from US acute care hospitals. In this year's study, we used the most recent two federal fiscal years (FFYs) of MEDPAR data available (2015 and 2016), which included Medicare Advantage health maintenance organization encounters. The 100 Top Hospitals program has used the MEDPAR database for many years. We believe it to be an accurate and reliable source for the types of high-level analyses performed in this study.
* The MEDPAR data years quoted in 100 Top Hospitals research are based on an FFY, a year that begins on October 1 of each calendar year and ends on September 30 of the
following calendar year. FFYs are identified by the year in which they end (for example, FFY 2016 begins October 1, 2015, and ends September 30, 2016). Data for all CMS Hospital
Compare measures is provided in calendar years, except the 30-day rates. CMS publishes the 30-day rates as three-year combined data values. We label these data points based
on the end date of each data set. For example, July 1, 2013 - June 30, 2016, is named “2016.”

We used the CMS Hospital Compare data set published in the third quarter of 2017 for healthcare-associated infection (HAI) measures, 30-day mortality rates, 30-day readmission rates, Medicare spending per beneficiary (MSPB) index, and Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient perception-of-care data.²⁶

We also used the 2016 Medicare cost reports, published in the federal Hospital Cost Report Information System (HCRIS) third-quarter 2017 data set, to create our proprietary database for determining system membership based on "home office" or "related organization" relationships reported by hospitals. The cost reports were also used to aggregate member hospital total operating expense to the system level. This data was used to classify health systems into three comparison groups.

Risk- and severity-adjustment models
The IBM Watson Health™ proprietary risk- and severity-adjustment models for inpatient mortality, complications, and LOS have been recalibrated for this study release using FFY 2015 data available in the all-payer Watson Health Projected Inpatient Database (PIDB). The PIDB is one of the largest US inpatient, all-payer databases of its kind, containing approximately 23 million inpatient discharges annually, obtained from approximately 5,000 hospitals, which comprise more than 65% of the nonfederal US market. Watson Health risk- and severity-adjustment models take advantage of available present-on-admission (POA) coding that is reported in all-payer data. Only patient conditions that are present on admission are used to determine the probability of death, complications, or the expected LOS.

The recalibrated models were used in producing the risk-adjusted inpatient mortality and complications indexes, based on two years of MEDPAR data (2015 and 2016). The severity-adjusted LOS was produced based on MEDPAR 2016 data.

Present-on-admission coding adjustments
From 2010 through 2016, we have observed a significant rise in the number of principal diagnosis and secondary diagnosis codes that do not have a valid POA indicator code in the MEDPAR data files. Since 2011, an invalid code of "0" has been appearing. This phenomenon has led to an artificial rise in the number of complications that appear to be occurring during the hospital stay. See Appendix C for details.

To correct for this bias, we adjust MEDPAR record processing through our inpatient mortality and complications risk models and LOS severity-adjustment model as follows:
1. Original, valid (Y, N, U, W, or 1) POA codes assigned to diagnoses were retained
2. Where a POA code of "0" appeared, we took the next four steps:
   a. We treated all diagnosis codes on the CMS exempt list as "exempt," regardless of POA coding
   b. We treated all principal diagnoses as "present on admission"
   c. We treated secondary diagnoses where the POA code "Y" or "W" appeared more than 50% of the time in the Watson Health all-payer database as "present on admission"
   d. All others were treated as "not present"
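The four-step recode above can be expressed as a small function. This is an illustrative sketch only: the exempt list and the all-payer "Y/W more than 50%" lookup are stand-in arguments, not the actual CMS or Watson Health reference tables.

```python
VALID_POA = {"Y", "N", "U", "W", "1"}

def recode_poa(dx_code, poa_code, is_principal, exempt_codes, yw_majority):
    """Return the POA status fed to the risk/severity models.

    exempt_codes: set of diagnosis codes on the CMS exempt list (stand-in).
    yw_majority: dict mapping dx_code -> True when "Y"/"W" appeared more
    than 50% of the time in the all-payer database (stand-in).
    """
    if poa_code in VALID_POA:
        return poa_code            # step 1: retain original valid codes
    # invalid POA code "0" from here on (step 2)
    if dx_code in exempt_codes:
        return "exempt"            # 2a: exempt list, regardless of POA coding
    if is_principal:
        return "Y"                 # 2b: principal dx -> present on admission
    if yw_majority.get(dx_code, False):
        return "Y"                 # 2c: secondary dx with Y/W majority -> present
    return "N"                     # 2d: all others -> not present
```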



Hospital exclusions
After building the database, we exclude hospitals that would have skewed the study results. Excluded from the study were:
–– Certain specialty hospitals (children's, psychiatric, substance abuse, rehabilitation, cancer, and long-term acute care)
–– Federally owned hospitals
–– Hospitals not located within the 50 states (such as those in Puerto Rico, Guam, and the US Virgin Islands)
–– Hospitals with Medicare average LOS longer than 30 days in FFY 2016
–– Hospitals with no reported Medicare patient deaths in FFY 2016
–– Hospitals that had fewer than 60% of patient records with valid POA codes

Cardiac, orthopedic, women's hospitals, and CAHs are included in the study if they are not excluded for any other criteria listed above.

In addition, specific patient records are also excluded:
–– Patients who were discharged to another short-term facility (this is done to avoid double-counting)
–– Patients who were not at least 65 years old
–– Rehabilitation, psychiatric, and substance abuse patients
–– Patients with stays shorter than one day

After all exclusions were applied, 2,422 individual hospitals were included in the 2018 study.

Health system exclusions
Health systems are excluded if:
–– One or more required measures are missing*
–– Fewer than 50% of member hospitals have valid POA coding
–– Fewer than 50% of member hospitals have valid data for any one or more required measures**

After all system exclusions were applied, 338 individual health systems were included in the 2018 study.

NOTE: CMS does not publish MSPB measures for Maryland hospitals due to a separate payment agreement. For this reason, we substituted the comparison group median and winner-excluded Maryland health systems that had no reported MSPB measure, to allow Maryland health systems to remain in the study. If a Maryland health system included hospitals in other states, we winner-excluded them when more than 50% of their member hospitals had no reported MSPB measure.

* For composite measures (HAI, 30-day mortality, 30-day readmissions), the exclusion is applied ONLY if all individual measures comprising the composite are missing.
- For HAI, different numbers of individual measures were required depending on the comparison group (five for large and medium systems; three for small systems). A system not meeting the minimum was excluded. See Appendix C for details.
- In systems where one or more individual 30-day mortality or 30-day readmission rates were missing, BUT NOT ALL, we calculated a median value for each, by comparison group, and substituted the median for the missing value.
** This rule was not applied to the HAI composite, which followed different exclusion logic. See Appendix C for details.
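The hospital-level exclusions above amount to a sequence of filters. A sketch using hypothetical field names (the real screening runs against MEDPAR and cost report fields, not this simplified record):

```python
EXCLUDED_CATEGORIES = {
    "children's", "psychiatric", "substance abuse",
    "rehabilitation", "cancer", "long-term acute care",
}

def passes_hospital_exclusions(h):
    """h: dict describing one hospital; field names are hypothetical."""
    return not (
        h["specialty_category"] in EXCLUDED_CATEGORIES
        or h["federally_owned"]
        or not h["in_50_states"]
        or h["medicare_alos_days"] > 30        # average LOS > 30 days, FFY 2016
        or h["medicare_deaths"] == 0           # no reported deaths, FFY 2016
        or h["valid_poa_share"] < 0.60         # <60% of records with valid POA
    )
```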

Identifying health systems
To be included in the study, a health system must have at least two short-term, general, acute care hospitals with separate Medicare provider identification numbers. The minimum of two hospitals must be met after hospital exclusions have been applied. In addition, we also include any cardiac, orthopedic, women's hospitals, and CAHs that passed the hospital exclusion rules cited on the previous page. For the 2018 study, we identified the "parent" system by finding the "home office" or "related organization," as reported on the hospitals' 2016 (or 2015) Medicare cost report.

We identify health systems that have subsystems with their own reported home offices or related organization relationships. Both the parent system and any identified subsystems are treated as "health systems" for purposes of this study and are independently profiled. Hospitals that belong to a parent health system and a subsystem are included in both for analysis of system performance.

To analyze health system performance, we aggregate data from all of a system's included hospitals. In the methodology summary tables in this section, we provide specific details about the calculations used for each performance measure and how these measures are aggregated to determine system performance.

After all exclusions were applied and parent systems identified, the final 2018 study group included 338 health systems with the profiles outlined in Table 9.

Table 9. 2018 health systems study group

System category | Systems | Member hospitals | Medicare patient discharges, FFY 2015 | Average hospitals per system | Average discharges per system
Winning systems | 15 | 121 | 330,760 | 8.1 | 22,051
Nonwinning systems | 323 | 2,798 | 9,368,880 | 8.7 | 29,006
Total systems | 338 | 2,919 | 9,699,640 | 8.6 | 28,697

Note: A hospital can be a member of both a parent system and a subsystem of that parent. They will be included in both parent and subsystem member hospital counts. The total unduplicated hospital count in this study was 2,422 hospitals.



Classifying health systems into comparison groups

Health system comparison groups
We refine the analysis of health systems by dividing them into three comparison groups based on total operating expense of the identified member hospitals. This is done to develop more action-driving benchmarks for like systems. For the 2018 study, the three comparison groups we used are listed in Table 10.

Table 10. Health system comparison groups, defined

Health system comparison group | Total operating expense | Number of systems in study | Number of winners
Large | > $1.85 billion | 113 | 5
Medium | $800 million - $1.85 billion | 116 | 5
Small | < $800 million | 109 | 5
Total systems | | 338 | 15

Scoring health systems on weighted performance measures

Evolution of performance measures
We use a balanced scorecard approach, based on public data, to select the measures we believe to be most useful for boards, CEOs, and other leaders in the current health system operating environment. In addition, we continually review trends in the healthcare market to identify the need for, and availability of, new performance measurement approaches. We welcome feedback from hospital and system executives on the usefulness of our measures and our approach.

As the healthcare industry has changed, our methods have evolved. Our current measures are centered on four main components of system performance: inpatient outcomes, extended outcomes, operational efficiency, and patient experience. Measures of financial performance are also included for information only, as not all health systems have publicly reported, audited financial statements.

The nine measures included in the 2018 study, by performance domain, are:

Inpatient outcomes
1. Risk-adjusted inpatient mortality index
2. Risk-adjusted complications index
3. Mean HAI index

Extended outcomes
4. Mean 30-day risk-adjusted mortality rate (includes acute myocardial infarction [AMI], heart failure [HF], pneumonia, chronic obstructive pulmonary disease [COPD], and stroke)
5. Mean 30-day risk-adjusted readmission rate (includes AMI, HF, pneumonia, total hip arthroplasty and/or total knee arthroplasty [THA/TKA], COPD, and stroke)

Operational efficiency
6. Severity-adjusted average LOS
7. Mean emergency department (ED) throughput (wait time minutes)
8. MSPB index

Patient experience
9. HCAHPS score (overall hospital performance)

The data sources for these measures are listed in Table 11.
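Table 10's expense thresholds translate directly into a classification rule. A sketch following the table's ranges; the handling of values exactly at $800 million or $1.85 billion is our assumption, since the table lists ranges only:

```python
def comparison_group(total_operating_expense):
    """Classify a system by the summed operating expense (in dollars)
    of its member hospitals, per the Table 10 ranges."""
    if total_operating_expense > 1.85e9:
        return "Large"
    if total_operating_expense >= 800e6:   # boundary treatment assumed
        return "Medium"
    return "Small"
```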

Table 11. Summary of measure data sources and data periods

Performance measure | Current performance (15 Top Health Systems award selection) | Five-year trend performance
Risk-adjusted inpatient mortality index | MEDPAR federal fiscal year (FFY) 2015 and 2016* | MEDPAR FFY 2011 - 2016*
Risk-adjusted complications index | MEDPAR FFY 2015 and 2016* | MEDPAR FFY 2011 - 2016*
Mean HAI index | CMS Hospital Compare calendar year (CY) 2016 | Trend not available
Mean 30-day mortality rate | CMS Hospital Compare July 1, 2013 - June 30, 2016 | CMS Hospital Compare: three-year data sets ending June 30 in 2013, 2014, 2015, 2016
Mean 30-day readmission rate (AMI, HF, pneumonia, THA/TKA**, COPD, stroke) | CMS Hospital Compare July 1, 2013 - June 30, 2016 | CMS Hospital Compare: three-year data sets ending June 30 in 2013, 2014, 2015, 2016
Severity-adjusted ALOS | MEDPAR FFY 2016 | MEDPAR FFY 2012 - 2016
Mean ED throughput measure | CMS Hospital Compare CY 2016 | CMS Hospital Compare 2012 - 2016
MSPB index | CMS Hospital Compare CY 2016 | CMS Hospital Compare 2012 - 2016
HCAHPS score (overall hospital rating) | CMS Hospital Compare CY 2016 | CMS Hospital Compare 2012 - 2016

* Two years of data are combined for each study year data point.
** Primary, elective total hip arthroplasty and total knee arthroplasty.

Following is the rationale for the selection of our balanced scorecard domains and the measures used for each.

Inpatient outcomes
Our measures of inpatient outcomes include three measures: risk-adjusted mortality index, risk-adjusted complications index, and mean HAI index. These measures show us how the system is performing on what we consider to be the most basic and essential care standards (survival, error-free care, and infection prevention) while treating patients in their hospitals.

Extended outcomes
The extended outcomes measures (30-day mortality rates for AMI, HF, pneumonia, COPD, and stroke patients; and 30-day readmission rates for AMI, HF, pneumonia, THA/TKA, COPD, and stroke patients) help us understand how the system's patients are faring over a longer period.²⁷ These measures are part of the CMS Hospital Value-Based Purchasing Program and are widely reported on in the industry. Hospitals with lower values appear to be providing, or coordinating, care across the continuum with better medium-term results for these conditions.

As systems become more interested in contracting for population health management, we believe that understanding outcomes beyond the walls of the acute care setting is imperative. We are committed to adding new metrics that assess performance along the continuum of care as they become publicly available.



Efficiency
The efficiency domain includes severity-adjusted average LOS, the mean ED throughput measure, and the MSPB index. Average LOS serves as a proxy for clinical efficiency in an inpatient setting, while the ED throughput measures focus on process efficiency in one of the most important access points to hospital care.

Average LOS is adjusted to increase the validity of comparisons across the hospital industry. We use a Watson Health proprietary severity-adjustment model to determine expected LOS at the patient level. Patient-level observed and expected LOS values are used to calculate the system-level, severity-adjusted, average LOS.

For ED throughput, we use the mean of the reported median minutes for two critical processes: median time from ED arrival to ED departure for admitted patients, and median time from ED arrival to ED departure for non-admitted patients.

The MSPB index is used as a proxy for continuum-of-care cost performance. This measure, as defined and calculated by CMS, is the ratio of Medicare spending per beneficiary treated in a specific hospital to the median Medicare spending per beneficiary nationally. It includes Medicare Part A and Part B payments three days prior to the hospital stay, during the stay, and 30 days post-discharge. We believe this indicator can be a beginning point for understanding hospital and local area cost performance relative to hospital peer markets.

Patient experience
We believe that a measure of patient perception of care (the patient "experience") is crucial to the balanced scorecard concept. Understanding how patients perceive the care a hospital provides, and how that perception compares with perceptions of patients in peer hospitals, is an important step a hospital can take in pursuing performance excellence. For this reason, we calculate an HCAHPS score, based on patient perception-of-care data from the HCAHPS patient survey. In this study, the HCAHPS score is based on the HCAHPS overall hospital rating question only.

A comprehensive, balanced view
Through the combined measures described in this section, we hope to provide a balanced picture of overall health system performance, which can reflect leadership's ability to consistently improve their organizations over time and sustain high performance, once achieved. Full details about each of these measures are included on the following pages.
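Two of the efficiency measures reduce to simple arithmetic on reported values; a sketch with hypothetical inputs:

```python
def ed_throughput_minutes(median_admitted, median_not_admitted):
    """Mean of the two reported ED medians: arrival-to-departure
    minutes for admitted and for non-admitted patients."""
    return (median_admitted + median_not_admitted) / 2

def mspb_index(hospital_spend, national_median_spend):
    """CMS-defined MSPB index: Medicare spending per beneficiary at
    the hospital divided by the national median (Part A and B payments
    from 3 days pre-admission through 30 days post-discharge)."""
    return hospital_spend / national_median_spend
```

An MSPB index below 1.0 indicates episode spending below the national median, which Table 6 reports as "Lower episode cost."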

Performance measure details

Risk-adjusted inpatient mortality index (favorable values are lower)

Why we include this element: Patient survival is a universally accepted measure of hospital quality. The lower the mortality index, the greater the survival of the patients in the system's hospitals, considering what would be expected based on patient characteristics. While all hospitals have patient deaths, this measure can show where deaths did not occur but were expected, or the reverse, given the patient's condition.

Calculation: We calculate a mortality index value based on the aggregate number of actual in-hospital deaths for all member hospitals in each system, divided by the number of normalized expected deaths, given the risk of death for each patient. Expected deaths are derived by processing MEDPAR patient record data through our proprietary mortality risk model, which is designed to predict the likelihood of a patient's death based on patient-level characteristics (age, sex, presence of complicating diagnoses, and other characteristics). We normalize the expected values using the observed-to-expected ratio for in-study health systems, by comparison group.

The mortality risk model takes into account POA coding in determining expected deaths. Palliative care patients (Z515/V66.7) are included in the risk model. Do not resuscitate (DNR) patients (Z66/V49.86) coded as "present on admission" are excluded. Post-discharge deaths are excluded. For more information, see Appendix C.

The reference value for this index is 1.00; a value of 1.15 indicates 15% more deaths occurred than were predicted, and a value of 0.85 indicates 15% fewer deaths than predicted.

Comments: We rank systems, by comparison group, on the difference between observed and expected deaths, expressed in normalized standard deviation units (z-score).²⁸, ²⁹ Health systems with the fewest deaths, relative to the number expected, after accounting for standard binomial variability, receive the most favorable scores. We use two years of MEDPAR data to reduce the influence of chance fluctuation.

We report the system-level ratio of observed to normalized expected deaths (the inpatient mortality index).

The MEDPAR data set includes both Medicare fee-for-service claims and Medicare Advantage (HMO) encounter records.

Systems with observed values statistically worse than expected (99% confidence), and whose values are above the high trim point (75th percentile of statistical outliers), are not eligible to be named benchmark health systems.
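The observed-to-expected index and the binomial z-score described above can be sketched as follows, assuming per-patient death probabilities from a risk model. The study's proprietary model and normalization are not public, so the inputs here are purely illustrative:

```python
import math

def mortality_index_and_z(observed_deaths, expected_probs):
    """Index = observed / expected deaths; z-score uses binomial variance.

    expected_probs: hypothetical per-patient predicted probabilities of
    in-hospital death, assumed already normalized so the comparison
    group's observed/expected ratio is 1.0.
    """
    expected = sum(expected_probs)
    index = observed_deaths / expected                    # reference value 1.00
    variance = sum(p * (1 - p) for p in expected_probs)   # binomial variability
    z = (observed_deaths - expected) / math.sqrt(variance)
    return index, z
```

An index of 0.85 with a strongly negative z-score indicates fewer deaths than predicted, beyond what chance variation would explain.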



Risk-adjusted complications index (favorable values are lower)

Why we include this element: Keeping patients free from potentially avoidable complications is an important goal for all healthcare providers. A lower complications index indicates fewer patients with complications, considering what would be expected based on patient characteristics. Like the mortality index, this measure can show where complications did not occur but were expected, or the reverse, given the patient's condition.

Calculation: We calculate a complications index value based on the aggregate number of cases with observed complications for all member hospitals in each system, divided by the number of normalized expected complications, given the risk of complications for each patient. Expected complications are derived by processing MEDPAR patient record data through our proprietary complications risk model, which is designed to predict the likelihood of complications during hospitalization. This model accounts for patient-level characteristics (age, sex, principal diagnosis, comorbid conditions, and other characteristics). We normalize the expected values using the observed-to-expected ratio for in-study health systems, by comparison group. Complications rates are calculated from normative data for two patient risk groups: medical and surgical.

POA coding is used in the risk model to identify pre-existing conditions, for accurate assessment of patient severity, and to distinguish them from complications occurring during hospitalization. For more details, see Appendix C.

The reference value for this index is 1.00; a value of 1.15 indicates 15% more complications occurred than were predicted, and a value of 0.85 indicates 15% fewer complications than predicted.

Comments: We rank systems on the difference between the observed and expected number of patients with complications, expressed in normalized standard deviation units (z-score). We use two years of MEDPAR data to reduce the influence of chance fluctuation.

We report the system-level ratio of observed to normalized expected complications (the complications index).

The MEDPAR data set includes both Medicare fee-for-service claims and Medicare Advantage (HMO) encounter records.

Systems with observed values statistically worse than expected (99% confidence), and whose values are above the high trim point (75th percentile of statistical outliers), are not eligible to be named benchmark health systems.

Mean HAI index (favorable values are lower)

Why we include this element: Because there is a public interest in tracking and preventing HAIs, we now use the HAI data reported by CMS to analyze hospital performance and provide national benchmarks in this area.

Calculation: Measure data was obtained from the CMS Hospital Compare data set. Hospitals complete the required surveillance and report HAI occurrences, and the count of patient days or procedures associated with each HAI metric, through the US Centers for Disease Control and Prevention's National Healthcare Safety Network (NHSN), which in turn reports data to CMS.

To calculate a standardized infection ratio (SIR) for reporting HAI incidence, expected values are developed by the NHSN using probability models constructed from NHSN baseline data, which represents a standard population. We normalize each expected value based on the observed and expected HAIs for each comparison group.

We use the system-level observed, normalized expected values and associated days or procedures to calculate a normalized z-score for each HAI metric. For each comparison group, the composite HAI measure is the mean of the individual HAI normalized z-scores included for that group.

Comments: We rank systems on the mean normalized HAI z-score, by comparison group.

For reporting, we also calculate a system-level observed-to-expected ratio (not normalized) for each HAI. We then calculate the unweighted mean of the observed-to-expected values for the HAIs included in each comparison group, as the reported HAI measure.

See Appendix C for details on HAIs included in each comparison group.

The CMS Hospital Compare HAI data set includes hospital-reported HAIs for all inpatients.

Mean 30-day risk-adjusted mortality rate (AMI, HF, pneumonia, COPD, and stroke patients)

Why we include this element
30-day mortality rates are a widely accepted measure of the effectiveness of hospital care. They allow us to look beyond immediate inpatient outcomes and understand how the care the health system provided to inpatients with these particular conditions may have contributed to their longer-term survival. In addition, tracking these measures may help health systems identify patients at risk for post-discharge problems and target improvements in discharge planning and after-care processes. Health systems that score well may be better prepared for a pay-for-performance structure.

Calculation
Data is from the CMS Hospital Compare data set. CMS calculates a 30-day mortality rate for each patient condition using three years of MEDPAR data, combined. We aggregate this data to produce a rate for each 30-day measure for each system. This is done by multiplying the hospital-level reported patient count (eligible patients) by the reported hospital rate to determine the number of patients who died within 30 days of admission. We sum the calculated deaths and divide by the sum of eligible patients for member hospitals of each system. This value is multiplied by 100 to produce the system-level 30-day mortality rate for each measure, expressed as a percent. CMS does not calculate rates for hospitals where the number of cases is too small (less than 25); in these cases, we substitute the comparison group-specific median rate for the affected 30-day mortality measure.

We calculate the arithmetic mean of the system-level included 30-day mortality rates (AMI, HF, pneumonia, COPD, and stroke) to produce the ranked composite measure.

Comments
We rank systems by comparison group, based on the mean rate for included 30-day mortality measures (AMI, HF, pneumonia, COPD, and stroke).

The CMS Hospital Compare data for 30-day mortality is based on Medicare fee-for-service claims only.

Favorable values are: Lower
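The hospital-to-system roll-up described above can be sketched in a few lines. This is an illustration only: the hospital counts and rates are invented, and the handling of small-volume hospitals reflects one plausible reading of the median-substitution rule (substituting the group median per hospital), which the text does not fully pin down.

```python
def system_rate(hospitals, group_median):
    """Aggregate hospital (eligible_patients, rate_percent) pairs to a
    system-level 30-day rate, expressed as a percent.

    Hospitals with fewer than 25 cases have no CMS-reported rate, so the
    comparison-group median rate is substituted for them (one reading of
    the substitution rule described above).
    """
    deaths = 0.0
    eligible = 0
    for n_patients, rate in hospitals:
        if rate is None or n_patients < 25:
            rate = group_median  # CMS suppresses small-volume rates
        deaths += n_patients * rate / 100.0  # back out the event count
        eligible += n_patients
    return 100.0 * deaths / eligible

# One condition (say, AMI) across three invented member hospitals:
ami = system_rate([(200, 13.0), (50, 15.0), (10, None)], group_median=14.0)

# The ranked composite is the arithmetic mean of the five condition rates
# (the other four rates here are invented placeholders):
rates = {"AMI": ami, "HF": 11.5, "pneumonia": 15.8, "COPD": 8.1, "stroke": 14.2}
composite = sum(rates.values()) / len(rates)
```

The same roll-up applies to the 30-day readmission composite below, with readmissions in place of deaths and THA/TKA added to the condition list.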

Mean 30-day risk-adjusted readmission rate (AMI, HF, pneumonia, THA/TKA, COPD, and stroke patients)

Why we include this element
30-day readmission rates are a widely accepted measure of the effectiveness of hospital care. They allow us to understand how the care the hospital provided to inpatients with these particular conditions may have contributed to issues with their post-discharge medical stability and recovery. Because these measures are part of the CMS Value-Based Purchasing Program, they are now being watched in the industry. Tracking these measures may help hospitals identify patients at risk for post-discharge problems if discharged too soon, as well as target improvements in discharge planning and after-care processes. Hospitals that score well may be better prepared for a pay-for-performance structure.

Calculation
Data is from the CMS Hospital Compare data set. CMS calculates a 30-day readmission rate for each patient condition using three years of MEDPAR data, combined. We aggregate this data to produce a rate for each 30-day measure for each system. This is done by multiplying the hospital-level reported patient count (eligible patients) by the reported hospital rate to determine the number of patients who were readmitted within 30 days of discharge. We sum the calculated readmissions and divide by the sum of eligible patients for member hospitals of each system. This value is multiplied by 100 to produce the system-level 30-day readmission rate for each measure, expressed as a percent. CMS does not calculate rates for hospitals where the number of cases is too small (less than 25); in these cases, we substitute the comparison group-specific median rate for the affected 30-day readmission measure.

We calculate the arithmetic mean of the system-level included 30-day readmission rates (AMI, HF, pneumonia, THA/TKA, COPD, and stroke) to produce the ranked composite measure.

Comments
We rank systems by comparison group, based on the mean rate for included 30-day readmission measures (AMI, HF, pneumonia, THA/TKA, COPD, and stroke).

The CMS Hospital Compare data for 30-day readmissions is based on Medicare fee-for-service claims only.

Favorable values are: Lower

Severity-adjusted average LOS

Why we include this element
A lower severity-adjusted average LOS generally indicates more efficient consumption of hospital resources and reduced risk to patients.

Calculation
We calculate an LOS index value for each health system by dividing the sum of the actual LOS of member hospitals by the sum of the normalized expected LOS for the hospitals in the system. Expected LOS adjusts for differences in severity of illness using a linear regression model. We normalize the expected values using the observed-to-expected ratio for in-study health systems, by comparison group. The LOS risk model takes into account POA coding in determining expected length of stay.

We convert the LOS index into days by multiplying each system's LOS index by the grand mean LOS for all in-study health systems. We calculate grand mean LOS by summing in-study health systems' LOS and dividing that by the number of health systems.

Comments
We rank systems on their severity-adjusted average LOS.

The MEDPAR data set includes both Medicare fee-for-service claims and Medicare Advantage (HMO) encounter records.

Favorable values are: Lower
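The index-to-days conversion above can be sketched as follows. All inputs are invented: "actual_los" and "norm_expected_los" stand in for the summed member-hospital days and the normalized model-expected days, and "avg_los" stands in for each system's average LOS used in the grand mean.

```python
def los_days(systems):
    """Convert each system's LOS index into severity-adjusted average days."""
    # LOS index: actual member-hospital days / normalized expected days.
    idx = {
        name: s["actual_los"] / s["norm_expected_los"]
        for name, s in systems.items()
    }
    # Grand mean LOS: sum of in-study systems' average LOS / number of systems.
    grand_mean = sum(s["avg_los"] for s in systems.values()) / len(systems)
    # Days: index scaled by the grand mean, so an index of 1.05 with a
    # 4.8-day grand mean reports as 5.04 severity-adjusted days.
    return {name: idx[name] * grand_mean for name in systems}

study = {
    "System A": {"actual_los": 9500.0, "norm_expected_los": 10000.0, "avg_los": 4.6},
    "System B": {"actual_los": 10500.0, "norm_expected_los": 10000.0, "avg_los": 5.0},
}
days = los_days(study)
```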

Mean ED throughput measure

Why we include this element
The hospital ED is an important access point to healthcare for many people. A key factor in evaluating ED performance is process "throughput," or measures of the timeliness with which patients receive treatment and are either admitted or discharged. Timely ED processes impact both care quality and the quality of the patient experience.

Calculation
Data is from the CMS Hospital Compare data set. CMS reports the median minutes for each ED throughput measure. We include two of the available ED measures in calculating an unweighted system aggregate measure. For each ED measure, we sum the median minutes for system member hospitals and divide by the number of member hospitals to produce the system-level minutes for each measure. We calculate the arithmetic mean of the two included ED measures to produce the ranked composite ED measure.

Comments
We rank systems on the mean ED throughput measure in minutes.

We include two measures that define important ED processes: median time from ED arrival to ED departure for admitted patients, and median time from ED arrival to ED departure for non-admitted patients.

Favorable values are: Lower

MSPB index

Why we include this element
MSPB helps determine how efficiently a hospital coordinates the care for its patients across continuum-of-care sites. Lower values indicate lower costs relative to national medians and thus greater efficiency.

Calculation
Data is from the CMS Hospital Compare data set. CMS calculates the cost of care for each admitted patient, including Medicare Part A and Part B costs. CMS aggregates costs associated with the index admission from three days preadmission, through the inpatient stay, and 30 days post-discharge. This cost is divided by the median national cost. CMS applies both numerator and denominator adjustments. We calculate the system-level measure by weighting each member hospital index by the hospital's MEDPAR discharges for the most current year in the study. We sum the weighted values and divide by the sum of the MEDPAR discharges of all member hospitals. This produces a weighted average MSPB index for each system.

An index value above 1.0 means higher-than-national median cost per beneficiary. An index value below 1.0 means lower-than-national median cost per beneficiary.

Comments
We rank systems on the weighted average MSPB index.

CMS calculates the cost of care for each admitted patient, including both Medicare Part A and Part B costs.

Favorable values are: Lower
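The system-level roll-up described above amounts to a discharge-weighted mean of member-hospital indexes. A minimal sketch, with invented hospital values:

```python
def system_mspb(hospitals):
    """Discharge-weighted average of member-hospital MSPB indexes."""
    weighted = sum(h["mspb"] * h["discharges"] for h in hospitals)
    total = sum(h["discharges"] for h in hospitals)
    return weighted / total

members = [
    {"mspb": 0.95, "discharges": 4000},  # below national median cost
    {"mspb": 1.05, "discharges": 1000},  # above national median cost
]
index = system_mspb(members)
```

Because the larger hospital dominates the weighting, the system index here lands below 1.0, i.e., lower-than-national median cost per beneficiary overall.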

HCAHPS score (overall hospital performance)

Why we include this element
We believe that including a measure of patient assessment/perception of care is crucial to the balanced scorecard concept. How patients perceive the care a hospital provides likely has a direct effect on its ability to remain competitive in the marketplace.

Calculation
Data is from the CMS Hospital Compare data set. We used the data published by CMS for the HCAHPS survey instrument question, "How do patients rate the hospital overall?" to score hospitals. Patient responses fall into three categories, and the number of patients in each category is reported as a percentage by CMS:
–– Patients who gave a rating of 6 or lower (low)
–– Patients who gave a rating of 7 or 8 (medium)
–– Patients who gave a rating of 9 or 10 (high)

For each answer category, we assign a weight as follows: 3 equals high or good performance, 2 equals medium or average performance, and 1 equals low or poor performance. We then calculate a weighted score for each hospital by multiplying the HCAHPS answer percent by the category weight. For each hospital, we sum the weighted percent values for the three answer categories. The result is the hospital HCAHPS score.

To calculate the aggregate system score, we multiply each member hospital HCAHPS score by its MEDPAR discharges, sum the weighted scores, and divide by the sum of the member hospital discharges. This produces a weighted average HCAHPS score for each system.

Comments
We rank systems based on the weighted average HCAHPS score. The highest possible HCAHPS score is 300 (100% of patients rate the health system hospitals high). The lowest HCAHPS score is 100 (100% of patients rate the hospitals low).

HCAHPS data is survey data, based on either a sample of hospital inpatients or all inpatients. The data set contains the question scoring of survey respondents.

Favorable values are: Higher
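The two-step scoring above (category weighting, then discharge weighting) can be sketched directly; the percentages and discharge counts are invented for illustration.

```python
def hospital_hcahps(pct_low, pct_medium, pct_high):
    """Weighted score from CMS answer-category percentages (sum to 100)."""
    return pct_low * 1 + pct_medium * 2 + pct_high * 3

def system_hcahps(hospitals):
    """MEDPAR-discharge-weighted average of member hospital scores."""
    weighted = sum(h["score"] * h["discharges"] for h in hospitals)
    return weighted / sum(h["discharges"] for h in hospitals)

best = hospital_hcahps(0, 0, 100)    # 300: everyone rates the hospital 9 or 10
worst = hospital_hcahps(100, 0, 0)   # 100: everyone rates it 6 or lower
score = hospital_hcahps(5, 20, 75)   # 5 + 40 + 225 = 270

system = system_hcahps([
    {"score": 270, "discharges": 3000},
    {"score": 250, "discharges": 1000},
])
```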

Determining the 15 Top Health Systems

Ranking
We rank health systems based on their performance on each of the included measures relative to the other in-study systems, by comparison group. We sum the ranks, giving all measures equal weight, and re-rank overall to arrive at a final rank for the system. The top five health systems with the best final rank in each of the three comparison groups are selected as the winners (15 total winners). The ranked performance measures are listed in Table 12.

Table 12. Ranked performance measures and weights

Ranked measure | Weight in overall ranking
Risk-adjusted inpatient mortality | 1
Risk-adjusted complications | 1
Mean HAI index | 1
Mean 30-day mortality rate | 1
Mean 30-day readmission rate | 1
Severity-adjusted average LOS | 1
Mean ED throughput | 1
MSPB index | 1
HCAHPS score (overall rating question) | 1

Winner exclusions
For mortality and complications, which have observed and expected values, we identify systems with performance that is statistically worse than expected. Systems with performance that is worse than expected are excluded from consideration when selecting the study winners. This is done because we do not want systems that have poor clinical outcomes to be declared winners.

A system is winner-excluded if both of the following conditions apply:
1. The observed value is higher than expected, and the difference is statistically significant with 99% confidence.
2. Its index value is above the high trim point. We calculate the 75th percentile index value for mortality and complications, including data only for systems that meet condition 1 above; these values are used as the high trim points. Systems with mortality or complications index values above the respective trim points are winner-excluded.

If MSPB is missing, the system is winner-excluded.
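The equal-weight rank-sum procedure can be sketched as follows. The nine per-measure ranks are invented, and the tie handling here (Python's stable sort order) is a simplification: the study does not describe its tie-breaking rule.

```python
def overall_rank(measure_ranks):
    """Sum equal-weighted per-measure ranks, then re-rank the totals.

    `measure_ranks` maps each system to its list of ranks (1 = best) on
    the nine ranked measures; the lowest rank-sum wins.
    """
    totals = {s: sum(r) for s, r in measure_ranks.items()}
    ordered = sorted(totals, key=totals.get)  # ascending rank-sum
    return {s: i + 1 for i, s in enumerate(ordered)}

ranks = {
    "System A": [1, 2, 1, 3, 2, 1, 2, 1, 2],  # rank-sum 15
    "System B": [2, 1, 3, 1, 1, 2, 1, 2, 1],  # rank-sum 14 -> best overall
    "System C": [3, 3, 2, 2, 3, 3, 3, 2, 3],  # rank-sum 24
}
final = overall_rank(ranks)
```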

Measure of "systemness": The performance-weighted alignment score

For several years, we have reported a performance-weighted alignment score that measures whether a system is consistently delivering top performance in each community served. It can bring focus to leadership goal-setting and contribute to the development of a system brand that represents reliable delivery of high value across all system sites.

Methodology
Each system performance-weighted alignment score is the average of the distance of each member hospital from their central point (centroid [measure of alignment]) and the distance of each of those hospitals from the 100th percentile point (perfect point [measure of performance]), weighted by the distance from the perfect point. A score is calculated for overall performance and for each individual measure.

The system performance-weighted alignment scores are ranked by comparison group and reported as rank percentiles. Higher percentiles mean better performance.

The profiled system performance is compared to the median alignment scores for the systems that were in the top quintile on both performance and improvement (top performance and improvement group). This group is selected using the study ranked metrics, not member hospital alignment. We find that high alignment has not yet been achieved uniformly across all measures, even in this high-performing group.
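The description above leaves the exact weighting open, so the following is only one plausible reading, for illustration: each hospital is a (performance percentile, improvement percentile) point, the centroid is the mean of the member points, the perfect point is (100, 100), and each hospital's two distances are averaged and then weighted by its distance from the perfect point. The coordinates are invented.

```python
from math import dist  # Euclidean distance, Python 3.8+

def alignment_score(hospital_points):
    """One plausible sketch of a performance-weighted alignment score.

    Lower values mean the member hospitals cluster tightly (alignment)
    and sit close to the perfect point (performance). Not the published
    formula; an interpretation of the verbal description only.
    """
    n = len(hospital_points)
    centroid = (
        sum(p[0] for p in hospital_points) / n,
        sum(p[1] for p in hospital_points) / n,
    )
    perfect = (100.0, 100.0)
    num = den = 0.0
    for p in hospital_points:
        d_centroid = dist(p, centroid)  # alignment component
        d_perfect = dist(p, perfect)    # performance component
        num += d_perfect * (d_centroid + d_perfect) / 2.0  # weighted average
        den += d_perfect
    return num / den

score = alignment_score([(80, 70), (85, 75), (60, 40)])
```

Under this reading, a system whose hospitals cluster tightly near the perfect point scores lower (better) than one with scattered, lower-performing members.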

[Figure: Example performance-weighted alignment plot for one system. Each system member hospital (hospitals 1 through 7) is plotted by its 2016 performance (x-axis, 0 to 100) against its 2012-2016 rate of improvement (y-axis, 0 to 100), together with the system centroid and the perfect point at (100, 100).]

Policy on revocation of a
15 Top Health Systems award
To preserve the integrity of the study, it is the
policy of the Watson Health 100 Top Hospitals
program to revoke a 15 Top Health Systems award
if a system is found to have submitted inaccurate
or misleading data to any data source used in
the study.

At the discretion of the 100 Top Hospitals program,


the circumstances under which a 15 Top Health
Systems award could be revoked include, but are
not limited to, the following:
–– Discovery by Watson Health staff, through
statistical analysis or other means, that a
health system has submitted inaccurate data
–– Discovery of media or internet reports
of governmental or accrediting agency
investigations, or sanctions for actions by a
health system that could have an adverse
impact on the integrity of the 15 Top Health
Systems studies or award winner selection
Appendix A: Health system winners and their member hospitals
Health system/hospital name Location Hospital Medicare ID
Asante Medford, OR
Asante Ashland Community Hospital Ashland, OR 380005
Asante Rogue Regional Medical Center Medford, OR 380018
Asante Three Rivers Medical Center Grants Pass, OR 380002
Aspirus Wausau, WI
Aspirus Iron River Hospital & Clinics Iron River, MI 231318
Aspirus Ironwood Hospital Ironwood, MI 231333
Aspirus Keweenaw Hospital Laurium, MI 231319
Aspirus Langlade Hospital Antigo, WI 521350
Aspirus Medford Hospital Medford, WI 521324
Aspirus Ontonagon Hospital Ontonagon, MI 231309
Aspirus Riverview Hospital & Clinics Wisconsin Rapids, WI 520033
Aspirus Wausau Hospital Wausau, WI 520030
CHI St. Joseph Health Bryan, TX
Bellville General Hospital Bellville, TX 450253
Burleson St. Joseph Health Center Caldwell, TX 451305
Grimes St. Joseph Health Center Navasota, TX 451322
Madison St. Joseph Health Center Madisonville, TX 451316
St. Joseph Regional Health Center Bryan, TX 450011
HealthPartners Bloomington, MN
Lakeview Hospital Stillwater, MN 240066
Park Nicollet Methodist Hospital St. Louis Park, MN 240053
Regions Hospital Saint Paul, MN 240106
Amery Hospital Amery, WI 521308
Hudson Hospital Hudson, WI 521335
Westfields Hospital and Clinic New Richmond, WI 521345
Maury Regional Health Columbia, TN
Marshall Medical Center Lewisburg, TN 441309
Maury Regional Hospital Columbia, TN 440073
Wayne Medical Center Waynesboro, TN 440010

Note: Winning systems are listed alphabetically by name. Member hospitals are listed alphabetically by state, then alphabetically by name.

Mayo Foundation Rochester, MN
Mayo Clinic Hospital Phoenix, AZ 030103
Mayo Clinic Hospital Jacksonville, FL 100151
Mayo Clinic Health System in Waycross Waycross, GA 110003
Mayo Clinic Hospital Rochester Rochester, MN 240010
Mayo Clinic Health System Albert Lea Albert Lea, MN 240043
Mayo Clinic Health System Cannon Falls Cannon Falls, MN 241346
Mayo Clinic Health System Fairmont Fairmont, MN 240166
Mayo Clinic Health System in Red Wing Red Wing, MN 240018
Mayo Clinic Health System Lake City Lake City, MN 241338
Mayo Clinic Health System Mankato Mankato, MN 240093
Mayo Clinic Health System New Prague New Prague, MN 241361
Mayo Clinic Health System Springfield Springfield, MN 241352
Mayo Clinic Health System St. James St. James, MN 241333
Mayo Clinic Health System Waseca Waseca, MN 241345
Mayo Clinic Franciscan Healthcare La Crosse, WI 520004
Mayo Clinic Health System Franciscan Sparta, WI 521305
Mayo Clinic Health System Chippewa Valley Bloomer, WI 521314
Mayo Clinic Health System Eau Claire Eau Claire, WI 520070
Mayo Clinic Health System Northland Barron, WI 521315
Mayo Clinic Health System Oakridge Osseo, WI 521302
Mayo Clinic Health System Red Cedar Menomonie, WI 521340
Mercy Chesterfield, MO
Mercy Hospital Berryville Berryville, AR 041329
Mercy Hospital Booneville Booneville, AR 041318
Mercy Hospital Fort Smith Fort Smith, AR 040062
Mercy Hospital Ozark Ozark, AR 041303
Mercy Hospital Rogers Rogers, AR 040010
Mercy Hospital Waldron Waldron, AR 041305
North Logan Mercy Hospital Paris, AR 041300
Mercy Hospital Columbus Columbus, KS 171308
Mercy Hospital Fort Scott Fort Scott, KS 170058
Mercy Hospital Aurora Aurora, MO 261316
Mercy Hospital Carthage Carthage, MO 261338
Mercy Hospital Cassville Cassville, MO 261317
Mercy Hospital Jefferson Festus, MO 260023

Mercy Hospital Joplin Joplin, MO 260001
Mercy Hospital Lebanon Lebanon, MO 260059
Mercy Hospital Lincoln Troy, MO 261319
Mercy Hospital Springfield Springfield, MO 260065
Mercy Hospital St. Louis St. Louis, MO 260020
Mercy Hospital Washington Washington, MO 260052
Mercy St. Francis Hospital Mountain View, MO 261335
Mercy Health Love County Marietta, OK 371306
Mercy Hospital Ada Ada, OK 370020
Mercy Hospital Ardmore Ardmore, OK 370047
Mercy Hospital El Reno El Reno, OK 370011
Mercy Hospital Healdton Healdton, OK 371310
Mercy Hospital Kingfisher Kingfisher, OK 371313
Mercy Hospital Logan County Guthrie, OK 371317
Mercy Hospital Oklahoma City Oklahoma City, OK 370013
Mercy Hospital Tishomingo Tishomingo, OK 371304
Mercy Hospital Watonga Watonga, OK 371302
Oklahoma Heart Hospital Oklahoma City, OK 370215
Oklahoma Heart Hospital South Campus Oklahoma City, OK 370234
Mercy Health - Cincinnati Cincinnati, OH
Mercy Health - Clermont Hospital Batavia, OH 360236
Mercy Health West Hospital Cincinnati, OH 360234
Mercy Hospital Anderson Cincinnati, OH 360001
Mercy Hospital Fairfield Fairfield, OH 360056
The Jewish Hospital - Mercy Health Cincinnati, OH 360016
Mission Health Asheville, NC
Angel Medical Center Franklin, NC 341326
Blue Ridge Regional Hospital Spruce Pine, NC 341329
Highlands Cashiers Hospital Highlands, NC 341316
McDowell Hospital Marion, NC 340087
Mission Hospital Asheville, NC 340002
Transylvania Regional Hospital Brevard, NC 341319
Roper St. Francis Healthcare Charleston, SC
Bon Secours St. Francis Hospital Charleston, SC 420065
Mount Pleasant Hospital Mount Pleasant, SC 420104
Roper Hospital Charleston, SC 420087

Sentara Healthcare Norfolk, VA
Sentara Albemarle Medical Center Elizabeth City, NC 340109
Martha Jefferson Hospital Charlottesville, VA 490077
Sentara Careplex Hospital Hampton, VA 490093
Sentara Halifax Regional Hospital South Boston, VA 490013
Sentara Leigh Hospital Norfolk, VA 490046
Sentara Norfolk General Hospital Norfolk, VA 490007
Sentara Northern Virginia Medical Center Woodbridge, VA 490113
Sentara Obici Hospital Suffolk, VA 490044
Sentara Princess Anne Hospital Virginia Beach, VA 490119
Sentara RMH Medical Center Harrisonburg, VA 490004
Sentara Virginia Beach General Hospital Virginia Beach, VA 490057
Sentara Williamsburg Regional Medical Center Williamsburg, VA 490066
St. Luke's Health System Boise, ID
St. Luke's Boise Medical Center Boise, ID 130006
St. Luke's Elmore Medical Center Mountain Home, ID 131311
St. Luke's Jerome Jerome, ID 131310
St. Luke's Magic Valley RMC Twin Falls, ID 130002
St. Luke's McCall McCall, ID 131312
St. Luke's Wood River Medical Center Ketchum, ID 131323
UPMC Susquehanna Health System Williamsport, PA
Muncy Valley Hospital Muncy, PA 391301
Soldiers and Sailors Memorial Hospital Wellsboro, PA 390043
Williamsport Regional Medical Center Williamsport, PA 390045
TriHealth Cincinnati, OH
Bethesda North Hospital Cincinnati, OH 360179
Good Samaritan Hospital Cincinnati, OH 360134
McCullough Hyde Memorial Hospital Oxford, OH 360046
TriHealth Evendale Hospital Cincinnati, OH 360362
UCHealth Aurora, CO
UCHealth Medical Center of the Rockies Loveland, CO 060119
UCHealth Poudre Valley Hospital Fort Collins, CO 060010
UCHealth University of Colorado Hospital Aurora, CO 060024
University of Colorado Health Memorial Hospital Colorado Springs, CO 060022

Appendix B: The top quintile: Highest-performing health systems

Large health systems
System name Location
Allina Health System Minneapolis, MN
Ardent Health Services Nashville, TN
Avera Health Sioux Falls, SD
Centura Health Englewood, CO
Cleveland Clinic Cleveland, OH
Franciscan Health Mishawaka, IN
Hospital Sisters Health System Springfield, IL
Indiana University Health Indianapolis, IN
Mayo Foundation* Rochester, MN
Memorial Healthcare System Hollywood, FL
Memorial Hermann Health System Houston, TX
Mercy* Chesterfield, MO
Mercy Health (OH) Cincinnati, OH
Northwestern Medicine Chicago, IL
Prime Healthcare Services Ontario, CA
SCL Health Denver, CO
Sentara Healthcare* Norfolk, VA
Spectrum Health Grand Rapids, MI
St. Joseph Health System Irvine, CA
St. Luke's Health System* Boise, ID
Sutter Health Valley Area Sacramento, CA
UCHealth* Aurora, CO
University Hospitals Health System Cleveland, OH

Medium health systems
System name Location
Alegent Creighton Health Omaha, NE
Aspirus* Wausau, WI
Bronson Healthcare Group Kalamazoo, MI
Edward Elmhurst Health Naperville, IL
Essentia Health Duluth, MN
Froedtert & the Medical College of Wisconsin Milwaukee, WI
HCA Mountain Division Salt Lake City, UT
HealthPartners* Bloomington, MN
HonorHealth Scottsdale, AZ
Kettering Health Network Dayton, OH
Main Line Health Bryn Mawr, PA
Mercy Health - Cincinnati* Cincinnati, OH
Ministry Health Care Milwaukee, WI
Mission Health* Asheville, NC
Mountain States Health Alliance Johnson City, TN
Munson Healthcare Traverse City, MI
Parkview Health Fort Wayne, IN
Premier Health Dayton, OH
Presbyterian Healthcare Services Albuquerque, NM
ProMedica Health System Toledo, OH
Saint Joseph Mercy Health System Ann Arbor, MI
Saint Luke's Health System Kansas City, MO
SCL Denver Region Denver, CO
TriHealth* Cincinnati, OH

Note: Health systems are ordered alphabetically. This year's 15 Top Health Systems (2018) are marked with an asterisk (*).

Small health systems
System name Location
Asante* Medford, OR
Baptist Health Care (FL) Pensacola, FL
Cape Cod Healthcare Hyannis, MA
Centegra Health System Crystal Lake, IL
Centra Health Lynchburg, VA
CHI St. Joseph Health* Bryan, TX
Genesis Health System Davenport, IA
Guthrie Healthcare System Sayre, PA
John D. Archbold Memorial Hospital Thomasville, GA
Mary Washington Healthcare Fredericksburg, VA
Maury Regional Health* Columbia, TN
MidMichigan Health Midland, MI
Northern Arizona Healthcare Flagstaff, AZ
Penn Highlands Healthcare DuBois, PA
PIH Health Whittier, CA
ProHealth Care Waukesha, WI
Roper St. Francis Healthcare* Charleston, SC
Saint Alphonsus Health System Boise, ID
Saint Joseph Regional Health System Mishawaka, IN
St. Charles Health System Bend, OR
St. Mary's Health Care System Athens, GA
UPMC Susquehanna Health System* Williamsport, PA

Note: Health systems are ordered alphabetically. This year's 15 Top Health Systems (2018) are marked with an asterisk (*).

Appendix C: Methodology details

IBM Watson Health™ makes normative comparisons of mortality and complications rates by using patient-level data to control for case mix and severity differences. We do this by evaluating ICD-9-CM diagnosis and procedure codes to adjust for severity within clinical case mix groupings. Conceptually, we group patients with similar characteristics (that is, age, sex, principal diagnosis, procedures performed, admission type, and comorbid conditions) to produce expected, or normative, comparisons. Through testing, we have found that this methodology produces normative comparisons using readily available administrative data, eliminating the need for additional data collection.30-34

To support the transition from ICD-9-CM to ICD-10-CM, our risk- and severity-adjustment models have been modified to use the Agency for Healthcare Research and Quality (AHRQ) Clinical Classifications Software (CCS)35 categories for risk assignment. CCS categories are defined in both coding languages with the intent of being able to accurately compare ICD-9 categories with ICD-10 categories. Calibrating our models using CCS categories provides the flexibility to accept and process patient record data in either ICD-9 or ICD-10 coding formats and produces consistent results in risk and severity adjustment.

The CCS-based approach applies to all 100 Top Hospitals program proprietary models that use code-based rate tables, which include the Risk-Adjusted Mortality Index, Expected Complication Risk Index, and Expected Resource Demand, Length of Stay (LOS) models used in this study.

Normative database development
Watson Health constructed a normative database of case-level data from its Projected Inpatient Database (PIDB), a national all-payer database containing more than 23 million all-payer discharges annually. This data is obtained from approximately 5,000 hospitals, representing over 65% of all discharges from short-term, general, nonfederal hospitals in the US. PIDB discharges are statistically weighted to represent the universe of short-term, general, nonfederal hospitals in the US. Demographic and clinical data are also included: age, sex, and LOS; clinical groupings (Medicare Severity Diagnosis Related Groups, or MS-DRGs); ICD-9-CM and ICD-10-CM principal and secondary diagnoses and procedures; present-on-admission (POA) coding; admission source and type; and discharge status. For this study, risk models were recalibrated using federal fiscal year (FFY) 2015 all-payer data.

Use of present-on-admission data
Under the Deficit Reduction Act of 2005, as of FFY 2008, hospitals receive reduced payments for cases with certain conditions, such as falls, surgical site infections, and pressure ulcers, which were not present at the time of the patient's admission but occurred during hospitalization. The Centers for Medicare & Medicaid Services (CMS) now requires all Inpatient Prospective Payment System (IPPS) hospitals to document whether a patient has these and other conditions when admitted. The Watson Health proprietary risk- and severity-adjustment models for inpatient mortality, complications, and LOS use POA data reported in the all-payer data to identify conditions that were present on admission and distinguish them from complications that occurred while the patient was in the hospital. Our models develop expected values based only on conditions that were present on admission.

In addition to considering the POA indicator codes in calibration of our risk- and severity-adjustment models, we have adjusted for missing/invalid POA coding found in the Medicare Provider Analysis and Review (MEDPAR) data files. After 2010, we have observed a significantly higher percentage of principal diagnosis and secondary diagnosis codes that do not have a valid POA indicator code in the MEDPAR data files. Since 2011, an invalid code of "0" has been appearing. This phenomenon has led to an artificial rise in the number of conditions that appear to be occurring during the hospital stay, as invalid POA codes are treated as "not present" by POA-enabled risk models.

To correct for this bias, we adjusted MEDPAR record processing through our mortality, complications, and LOS models as follows:
1. Original, valid (Y, N, U, W, or 1) POA codes assigned to diagnoses were retained.
2. Where a POA code of "0" appeared, we took the next four steps:
a. We treated all diagnosis codes on the CMS exempt list as "exempt," regardless of POA coding.
b. We treated all principal diagnoses as "present on admission."
c. We treated secondary diagnoses where the POA code "Y" or "W" appeared more than 50% of the time in the Watson Health all-payer database as "present on admission."
d. All others were treated as "not present."

Percentage of diagnosis codes with POA indicator code of “0” by MEDPAR year
2010 2011 2012 2013 2014 2015 2016
Principal diagnosis 0.00% 4.26% 4.68% 4.37% 3.40% 4.99% 2.45%
Secondary diagnosis 0.00% 15.05% 19.74% 22.10% 21.58% 23.36% 21.64%
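The recode rules above can be sketched as a per-diagnosis decision function. This is an illustration only: the exempt list, the "usually present" lookup, and the diagnosis codes are invented stand-ins for the real CMS exempt list and the Watson Health all-payer frequency data, and the rule ordering reflects one reading of the steps.

```python
VALID_POA = {"Y", "N", "U", "W", "1"}

def resolve_poa(code, poa, is_principal, exempt_codes, usually_poa):
    """Return the POA status used by the risk models for one diagnosis."""
    if code in exempt_codes:
        return "exempt"   # 2a: CMS exempt list, regardless of POA coding
    if poa in VALID_POA:
        return poa        # 1: original valid POA codes are retained
    # Remaining cases cover the invalid "0" (or missing) POA code:
    if is_principal:
        return "Y"        # 2b: principal diagnosis -> present on admission
    if code in usually_poa:
        return "Y"        # 2c: "Y"/"W" more than 50% of the time in the
                          #     all-payer database -> present on admission
    return "N"            # 2d: all others -> not present

# A secondary diagnosis with invalid POA "0" that is usually present
# on admission in the reference data resolves to "Y":
status = resolve_poa("I10", "0", False, exempt_codes=set(), usually_poa={"I10"})
```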

Methods for identifying patient severity
Without adjusting for differences in patient severity, comparing outcomes among hospitals does not present an accurate picture of performance. To make normative comparisons of hospital outcomes, we must adjust raw data to accommodate differences that result from the variety and severity of admitted cases.

Risk-adjusted inpatient mortality index models
Watson Health has developed an inpatient mortality risk model that can be applied to coded patient claims data to estimate the expected probability of death occurring, given various patient-related factors. The mortality risk model used in this study is calibrated for patients age 65 and older. Additionally, in response to the transition to ICD-10-CM, diagnosis and procedure codes (and the interactions among them) have been mapped to the AHRQ CCS for assignment of risk instead of using the individual diagnosis, procedure, and interaction effects.

We exclude long-term care, psychiatric, substance abuse, rehabilitation, and federally owned or controlled facilities. In addition, we exclude certain patient records from the data set: psychiatric; substance abuse; unclassified cases (MS-DRGs 945, 946, and 999); cases in which patient age was less than 65 years; and cases in which a patient transferred to another short-term, acute care hospital. Palliative care patients (Z515; V66.7) are included in the mortality risk model, which is calibrated to estimate probability of death for these patients. The Watson Health mortality risk model excludes records with "do not resuscitate" (DNR) (Z66; V49.86) orders that are coded as present on admission.

Note: We are no longer able to exclude all rehabilitation patients as we have done in the past. This is because the ICD-10-CM coding system does not identify rehabilitation patients. We can only exclude those patients coded as being in a Prospective Payment System (PPS)-exempt hospital rehabilitation unit (provtype = R or T).

Hospice versus palliative care patients
Separately licensed hospice unit patient records are not included in MEDPAR data. They have a separate billing type and separate provider numbers. In addition, patients receiving hospice treatment in acute care beds are billed under hospice, not the hospital, and would not be in the MEDPAR data file.

Inpatients coded as palliative care (Z515; V66.7) are included in the study. Over the past few years, the number of patients coded as palliative care has increased significantly, and our risk models have been calibrated to produce expected values for these patients.
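The exclusion rules above can be expressed as a simple record filter. A minimal sketch, with hypothetical field names standing in for the actual MEDPAR record layout:

```python
# A minimal sketch of the record-level exclusions described above.
# Field names on the record dicts are hypothetical; the actual MEDPAR
# layout differs.

EXCLUDED_DRGS = {945, 946, 999}  # unclassified cases

def eligible_for_mortality_model(rec):
    """Apply the study's stated record exclusions for the mortality model."""
    if rec["ms_drg"] in EXCLUDED_DRGS:
        return False
    if rec["age"] < 65:
        return False
    if rec["transferred_to_acute"]:  # transfer to short-term, acute care hospital
        return False
    if rec["dnr_poa"]:               # DNR order coded as present on admission
        return False
    return True                      # palliative care records remain included

records = [
    {"ms_drg": 470, "age": 72, "transferred_to_acute": False, "dnr_poa": False},
    {"ms_drg": 999, "age": 80, "transferred_to_acute": False, "dnr_poa": False},
    {"ms_drg": 291, "age": 68, "transferred_to_acute": False, "dnr_poa": True},
]
kept = [r for r in records if eligible_for_mortality_model(r)]
print(len(kept))  # 1
```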

Excluding records that are DNR status at admission is supported by the literature. A recent peer-reviewed publication stated: "Inclusion of DNR patients within mortality studies likely skews those analyses, falsely indicating failed resuscitative efforts rather than humane decisions to limit care after injury"36.

Our rationale is straightforward: If a patient is admitted DNR (POA), then typically no heroic efforts would be made to save that patient if they began to fail. Without the POA DNR exclusion, if a given hospital has a higher proportion of POA DNR patients that it is not attempting to save from death compared to an otherwise similar hospital that is not admitting as high a proportion of such patients, the first hospital would look lower-performing compared to the second through no fault of its own. The difference would be driven by the proportion of POA DNR patients.

A standard logistic regression model is used to estimate the risk of mortality for each patient. This is done by weighting the patient records of the hospital by the logistic regression coefficients associated with the corresponding terms in the model and the intercept term. This produces the expected probability of an outcome for each eligible patient (numerator) based on the experience of the norm for patients with similar characteristics (for example, age, clinical grouping, and severity of illness)30-34. This model accounts for only patient conditions that are present on admission when calculating risk. Additionally, in response to the transition to ICD-10-CM, diagnosis and procedure codes, and the interactions among them, have been mapped to AHRQ CCS categories for assignment of risk instead of using the individual diagnosis, procedure, and interaction effects. See discussion under the methods for identifying patient severity above.

Staff physicians at Watson Health suggested clinical patient characteristics that were incorporated into the proprietary models. After assigning the predicted probability of the outcome for each patient, the patient-level data can then be aggregated across a variety of groupings, including health system, hospital, service line, or MS-DRG classification.

Expected complications rate index models
Watson Health has developed a complications risk model that can be applied to coded patient claims data to estimate the expected probability of a complication occurring, given various patient-related factors. We exclude long-term care, psychiatric, substance abuse, rehabilitation, and federally owned or controlled facilities. In addition, we exclude certain patient records from the data set: psychiatric; substance abuse; unclassified cases (MS-DRGs 945, 946, and 999); cases in which patient age was less than 65 years; and cases in which a patient transferred to another short-term, acute care hospital. Palliative care patients (Z515; V66.7) are included in the complications risk model, which is calibrated to estimate probability of complications for these patients.

Note: We are no longer able to exclude all rehabilitation patients as we have done in the past. This is because the ICD-10-CM coding system does not identify rehabilitation patients. We can only exclude those patients coded as being in a PPS-exempt hospital rehabilitation unit (provtype = R or T).

Risk-adjusted complications refer to outcomes that may be of concern when they occur at a greater-than-expected rate among groups of patients, possibly reflecting systemic quality-of-care issues. The Watson Health complications model uses clinical qualifiers to identify complications that have occurred in the inpatient setting. The complications used in the model are listed on the following page.
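The coefficient-weighting step used by both risk models is standard logistic regression scoring. A minimal sketch with invented terms and coefficients (the proprietary model's terms and values are not public):

```python
import math

# Sketch of producing an expected probability from logistic regression
# coefficients, as described above. The intercept, terms, and coefficients
# are invented for illustration and are not the proprietary model's.

INTERCEPT = -4.0
COEFFICIENTS = {"age_over_80": 0.9, "high_severity": 1.4}  # hypothetical terms

def expected_probability(patient_terms):
    """Weight the record by its coefficients, add the intercept, invert the logit."""
    log_odds = INTERCEPT + sum(COEFFICIENTS[t] for t in patient_terms)
    return 1.0 / (1.0 + math.exp(-log_odds))

# Summing per-patient expected probabilities yields the expected number of
# events for any grouping (health system, hospital, service line, MS-DRG).
patients = [["age_over_80", "high_severity"], [], ["high_severity"]]
expected_events = sum(expected_probability(p) for p in patients)
```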



Complication Patient group
Postoperative complications relating to urinary tract Surgical only
Postoperative complications relating to respiratory system except pneumonia Surgical only
Gastrointestinal complications following procedure Surgical only
Infection following injection/infusion All patients
Decubitus ulcer All patients
Postoperative septicemia, abscess, and wound infection Surgical, including cardiac
Aspiration pneumonia Surgical only
Tracheostomy complications All patients
Complications of cardiac, vascular, and hemodialysis devices Surgical, including cardiac
Nervous system complications from devices/complications of nervous system devices Surgical only
Complications of genitourinary devices Surgical only
Complications of orthopedic devices Surgical only
Complications of other and unspecified devices, implants, and grafts Surgical only
Other surgical complications Surgical, including cardiac
Miscellaneous complications All patients
Cardio-respiratory arrest, shock, or failure Surgical only
Postoperative complications relating to nervous system Surgical only
Postoperative acute myocardial infarction (AMI) Surgical only
Postoperative cardiac abnormalities except AMI Surgical only
Procedure-related perforation or laceration All patients
Postoperative physiologic and metabolic derangements Surgical, including cardiac
Postoperative coma or stupor Surgical, including cardiac
Postoperative pneumonia Surgical, including cardiac
Pulmonary embolism All patients
Venous thrombosis All patients
Hemorrhage, hematoma, or seroma complicating a procedure All patients
Postprocedure complications of other body systems All patients
Complications of transplanted organ (excludes skin and cornea) Surgical only
Disruption of operative wound Surgical only
Complications relating to anesthetic agents and central nervous system depressants Surgical, including cardiac
Complications relating to antibiotics All patients
Complications relating to other anti-infective drugs All patients
Complications relating to antineoplastic and immunosuppressive drugs All patients
Complications relating to anticoagulants and drugs affecting clotting factors All patients
Complications relating to narcotics and related analgesics All patients
Complications relating to non-narcotic analgesics All patients
Complications relating to anticonvulsants and antiparkinsonism drugs All patients
Complications relating to sedatives and hypnotics All patients
Complications relating to psychotropic agents All patients
Complications relating to CNS stimulants and drugs affecting the autonomic nervous system All patients
Complications relating to drugs affecting cardiac rhythm regulation All patients
Complications relating to cardiotonic glycosides (digoxin) and drugs of similar action All patients
Complications relating to other drugs affecting the cardiovascular system All patients
Complications relating to antiasthmatic drugs All patients
Complications relating to other medications (includes hormones, insulin, iron, and oxytocic agents) All patients

A standard regression model is used to estimate the risk of experiencing a complication for each patient. This is done by weighting the patient records of the hospital by the regression coefficients associated with the corresponding terms in the prediction models and intercept term. This method produces the expected probability of a complication for each patient based on the experience of the norm for patients with similar characteristics. After assigning the predicted probability of a complication for each patient in each risk group, it is then possible to aggregate the patient-level data across a variety of groupings37-40, including health system, hospital, service line, or MS-DRG classification. This model accounts for only patient conditions that are present on admission when calculating risk. Additionally, in response to the transition to ICD-10-CM, diagnosis and procedure codes, and the interactions among them, have been mapped to AHRQ CCS categories for assignment of risk instead of using the individual diagnosis, procedure, and interaction effects.

Index interpretation
An outcome index is a ratio of an observed number of outcomes to an expected number of outcomes in a population. This index is used to make normative comparisons and is standardized in that the expected number of events is based on the occurrence of the event in a normative population. The normative population used to calculate expected numbers of events is selected to be similar to the comparison population with respect to relevant characteristics, including age, sex, region, and case mix.

The index is the number of observed events divided by the number of expected events and can be calculated for outcomes that involve counts of occurrences (for example, deaths or complications). Interpretation of the index relates the experience of the comparison population, relative to a specified event, to the expected experience based on the normative population.

Examples:

10 events observed ÷ 10 events expected = 1.0: The observed number of events is equal to the expected number of events based on the normative experience

10 events observed ÷ 5 events expected = 2.0: The observed number of events is twice the expected number of events based on the normative experience

10 events observed ÷ 25 events expected = 0.4: The observed number of events is 60% lower than the expected number of events based on the normative experience

Therefore, an index value of 1.0 indicates no difference between observed and expected outcome occurrence. An index value greater than 1.0 indicates an excess in the observed number of events relative to the expected based on the normative experience. An index value of less than 1.0 indicates fewer events observed than would be expected based on the normative experience. An additional interpretation is that the difference between 1.0 and the index is the percentage difference in the number of events relative to the norm. In other words, an index of 1.05 indicates 5% more outcomes, and an index of 0.90 indicates 10% fewer outcomes than expected based on the experience of the norm. The index can be calculated across a variety of groupings (for example, hospital or service line).
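The three worked examples above reduce to a single division:

```python
# The observed-to-expected index arithmetic described above, as a helper.

def outcome_index(observed, expected):
    """Ratio of observed events to expected events for a population."""
    return observed / expected

print(outcome_index(10, 10))  # 1.0: matches the normative experience
print(outcome_index(10, 5))   # 2.0: twice the expected number of events
print(outcome_index(10, 25))  # 0.4: 60% fewer events than expected
```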



Healthcare-associated infections
Healthcare-associated infections (HAIs), as developed by the National Healthcare Safety Network* (NHSN) and reported by CMS in the public Hospital Compare data set, capture new information about the quality of inpatient care. Infection rates for methicillin-resistant staphylococcus aureus (MRSA), central line-associated blood stream infections (CLABSI), catheter-associated urinary tract infection (CAUTI), clostridium difficile colitis (C. diff), and other problematic infections must be reported to CMS, supporting hospitals' tracking of, and intervention to reduce, these infections. New public data will allow the development of national benchmarks for use by hospital leadership to effect change.

HAI measures
HAI-1  CLABSI in ICUs and select wards
HAI-2  CAUTI in intensive care units (ICUs) and select wards
HAI-3  Surgical site infection (SSI): colon
HAI-4  Surgical site infection from abdominal hysterectomy (SSI: hysterectomy)
HAI-5  Methicillin-resistant staphylococcus aureus (MRSA) blood laboratory-identified events (bloodstream infections)
HAI-6  C. diff laboratory-identified events (intestinal infections)

The HAI measures are reported as risk-adjusted standardized infection ratios (SIRs) using probability models and normative data sets maintained by a branch of the Centers for Disease Control and Prevention (CDC), the NHSN. Along with reporting SIR data to CMS, NHSN is responsible for administering HAI surveillance procedures and reporting specifications, along with producing software and training programs for all participating hospitals. Its underlying methodology details for building the SIR are documented and updated annually in a reference guide posted at the CDC website41.

In addition to the SIR values for each HAI, CMS publishes the observed and expected values, as well as a population count (days or procedures), which varies by measure**. To aggregate the member hospital HAI data to the system level for ranking, we sum the observed and expected values, and counts.

Note: Any valid member hospital data for individual HAIs is included in development of the system-level HAIs.

At the system level, we normalize the expected values for each HAI by multiplying them by the ratio of the sum of observed to sum of expected values for the system comparison group. We calculate a normalized z-score for each HAI, for each system, using the observed value, normalized expected value, and count.

To develop a composite HAI measure, we believe it is not appropriate to simply "roll up" observed and expected values across the different HAIs, because the overall observed-to-expected ratio would be weighted by the rates for each HAI, which could be quite different, and the HAIs are also likely to be distributed differently from hospital to hospital. For these reasons, we calculate an unweighted mean of the normalized z-scores as the composite HAI measure used for ranking systems.
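The system-level normalization and composite described above can be sketched as follows. The text does not specify the exact z-score formula, so a simple Poisson approximation, z = (observed − expected) / √expected, stands in, and the counts are invented:

```python
import math

# Sketch of the system-level HAI composite described above. The expected-value
# normalization follows the text; the z-score form is not specified there, so a
# Poisson approximation stands in. All counts are invented.

def normalized_expected(expected, group_observed_sum, group_expected_sum):
    """Scale a system's expected value by its comparison group's o/e ratio."""
    return expected * (group_observed_sum / group_expected_sum)

def hai_z(observed, expected):
    """Illustrative z-score: standardized difference under a Poisson assumption."""
    return (observed - expected) / math.sqrt(expected)

# One system, three HAIs, as (observed, expected) pairs.
hais = [(4, 5.0), (2, 2.5), (7, 4.0)]
group = (110.0, 100.0)  # comparison-group sums: (observed, expected)

zscores = [hai_z(o, normalized_expected(e, *group)) for o, e in hais]
composite = sum(zscores) / len(zscores)  # unweighted mean of z-scores
```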

* See blog.eoscu.com/blog/what-is-the-national-healthcare-safety-network for more information.


** CLABSI: device days, CAUTI: urinary catheter days, SSI colon: procedures, SSI hysterectomy: procedures, MRSA: patient days, C. diff: patient days.

Data note relating to the July 2016 Hospital Compare performance period (July 1, 2012 - June 30, 2015): The pneumonia measure cohort was expanded to include principal discharge codes for sepsis and aspiration pneumonia. This resulted in a significant increase in pneumonia 30-day mortality rates nationally, beginning with the 2015 data year.

A system is excluded from the study if it does not have the HAIs required for its comparison group. See table below.

HAIs by compare group
Compare group    Required HAIs
Large systems    HAI-1, HAI-2, HAI-3, HAI-5, HAI-6
Medium systems   HAI-1, HAI-2, HAI-3, HAI-5, HAI-6
Small systems    HAI-1, HAI-2, HAI-6

Note: The required HAIs were selected based on an analysis of the completeness of data available for each HAI in each system comparison group.

For reporting, we calculate an unweighted mean of the individual system-level HAI observed-to-expected (o/e) values. If no value was available for a measure, the composite measure represents the mean of available measures, as long as the system had the minimum required HAIs for its comparison group. For each HAI, the o/e ratio can be viewed as a unitless measure that is essentially a percent difference; that is, (observed-to-expected ratio − 1) × 100 = percent difference, which is unbiased by differences in the rates by HAI or distributions of HAIs by hospital.
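The percent-difference identity above, in code form:

```python
# The unitless o/e percent-difference noted above:
# (observed-to-expected ratio - 1) x 100.

def oe_percent_difference(observed, expected):
    return (observed / expected - 1) * 100

print(oe_percent_difference(11, 10))  # about 10: 10% more events than expected
print(oe_percent_difference(8, 10))   # about -20: 20% fewer events than expected
```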

30-day risk-adjusted mortality rates and 30-day risk-adjusted readmission rates
This study currently includes two extended
outcome measures (30-day mortality and 30-
day readmissions), as developed by CMS and
published in the Hospital Compare data set. CMS is
reporting three-year rolling data periods, with the
most current data set being July 1, 2013 - June
30, 2016. The Hospital Compare website and
database were created by CMS, the US Department
of Health and Human Services, and other members
of the Hospital Quality Alliance. The data on the
website comes from hospitals that have agreed
to submit quality information that will be made
public. Both measures used in this study have been
endorsed by the National Quality Forum (NQF).



CMS calculates the 30-day mortality and 30-day readmission rates from Medicare enrollment and claims records using statistical modeling techniques that adjust for patient-level risk factors and account for the clustering of patients within hospitals. Only Medicare fee-for-service records are included. We are including 30-day mortality rates for acute myocardial infarction (AMI), heart failure (HF), pneumonia, chronic obstructive pulmonary disease (COPD), and stroke patients, and 30-day readmission rates for AMI, HF, pneumonia, elective total hip and knee arthroplasty (THA/TKA), COPD, and stroke patients.

The individual CMS mortality models estimate hospital-specific, risk-standardized, all-cause 30-day mortality rates for patients hospitalized with a principal diagnosis of AMI, HF, pneumonia, COPD, or stroke. All-cause mortality is defined as death from any cause within 30 days after the admission date, regardless of whether the patient dies while still in the hospital or after discharge.

The individual CMS readmission models estimate hospital-specific, risk-standardized, all-cause 30-day readmission rates for patients discharged alive to a non-acute care setting with a principal diagnosis of AMI, HF, pneumonia, THA/TKA, COPD, or stroke. Patients may have been readmitted back to the same hospital, to a different hospital, or to another acute care facility. They may have been readmitted for the same condition as their recent hospital stay or for a different reason (CMS has indicated this is to discourage hospitals from coding similar readmissions as different readmissions). All readmissions that occur within 30 days of a discharge to a non-acute care setting are included, with a few exceptions. CMS does not count planned admissions (obstetrical delivery, transplant surgery, maintenance chemotherapy, rehabilitation, and non-acute admissions for a procedure) as readmissions.

Length-of-stay methodologies
Watson Health has developed a severity-adjusted resource demand model that can be applied to coded patient claims data to estimate the expected LOS, given various patient-related factors42. We exclude long-term care, psychiatric, substance abuse, rehabilitation, and federally owned or controlled facilities. In addition, we exclude certain patient records from the data set: psychiatric; substance abuse; unclassified cases (MS-DRGs 945, 946, and 999); cases in which patient age was less than 65 years; and cases in which a patient was transferred to another short-term, acute care hospital. Palliative care patients (Z515; V66.7) are included in the LOS model, which is calibrated to predict expected LOS for these patients.

Note: We are no longer able to exclude all rehabilitation patients, as we have done in the past, because the ICD-10-CM coding system does not identify rehabilitation patients. We can only exclude those patients coded as being in a PPS-exempt hospital rehabilitation unit (provtype = R or T).

Our severity-adjusted resource demand model allows us to produce risk-adjusted performance comparisons on LOS between or across subgroups of inpatients. These patient groupings can be based on factors such as clinical groupings, hospitals, product lines, geographic regions, and physicians. This regression model adjusts for differences in diagnosis type and illness severity, based on ICD-9-CM coding. It also adjusts for patient age, gender, and admission status. Its associated LOS weights allow group comparisons on a national level and in a specific market area. In response to the transition to ICD-10-CM, diagnosis, procedure, and interaction codes have been mapped to AHRQ CCS categories for severity assignment instead of using the individual diagnosis, procedure, and interaction effects.

POA coding allows us to estimate appropriate adjustments to LOS weights based on pre-existing conditions. Complications that occurred during the hospital stay are not considered in the model. We calculate expected values from model coefficients that are normalized to the clinical group and transformed from log scale.

Emergency department throughput measure
We have included two ED throughput measures from the CMS Hospital Compare data set. The hospital ED is an access point to healthcare for many people. A key factor in evaluating ED performance is process "throughput": measures of the timeliness with which patients are seen by a provider, receive treatment, and either are admitted or discharged. Timely ED processes may impact both care quality and the quality of the patient experience. We chose to include measures that define two ED processes: median time from ED arrival to ED departure for admitted patients, and median time from ED arrival to ED departure for non-admitted patients.

For this study's measure, we used 2016 data from CMS Hospital Compare. Hospitals are required to have reported both ED measures or they are excluded from the study. Our ranked metric is the calculated mean of the two included measures.

Hospitals participating in the CMS Inpatient Quality Reporting and Outpatient Quality Reporting Programs report data for any eligible adult ED patients, including Medicare patients, Medicare managed care patients, and non-Medicare patients. Submitted data can be for all eligible patients or a sample of patients, following CMS sampling rules.

ED throughput measures
ED-1b   Median time from ED arrival to ED departure for admitted ED patients
OP-18b  Median time from ED arrival to ED departure for discharged ED patients

Medicare spend per beneficiary index
The Medicare spend per beneficiary (MSPB) index is included as a proxy for episode-of-care cost efficiency for hospitalized patients. CMS develops and publishes this risk-adjusted index in the public Hospital Compare data sets, and in FFY 2015, began to include it in the Hospital Value-Based Purchasing program. The CMS-stated reason for including this measure is "… to reward hospitals that can provide efficient care at a lower cost to Medicare"43.

The MSPB index evaluates hospitals' efficiency relative to the efficiency of the median hospital, nationally. Specifically, the MSPB index assesses the cost to Medicare of services performed by hospitals and other healthcare providers during an MSPB episode, which comprises the period three days prior to, during, and 30 days following a patient's hospital stay. Payments made by Medicare and the beneficiary (that is, allowed charges) are counted in the MSPB episode as long as the start of the claim falls within the episode window. IPPS outlier payments (and outlier payments in other provider settings) are also included in the calculation of the MSPB index. The index is available for Medicare beneficiaries enrolled in Medicare Parts A and B who were discharged from short-term, acute care hospitals during the period of performance. Medicare Advantage enrollees are not included. This measure excludes patients who died during the episode.
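The episode window described above (three days before admission through 30 days after discharge, with a claim counted if it starts inside the window) can be sketched as a date check; the dates are invented:

```python
from datetime import date, timedelta

# Sketch of the MSPB episode window described above. A claim is counted
# when its start date falls within the window running from three days
# before admission through 30 days after discharge. Dates are invented.

def in_mspb_episode(claim_start, admit, discharge):
    window_open = admit - timedelta(days=3)
    window_close = discharge + timedelta(days=30)
    return window_open <= claim_start <= window_close

admit, discharge = date(2016, 3, 10), date(2016, 3, 15)
print(in_mspb_episode(date(2016, 3, 8), admit, discharge))   # True: 2 days prior
print(in_mspb_episode(date(2016, 4, 20), admit, discharge))  # False: >30 days after
```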



The MSPB index is calculated by dividing the profiled hospital's risk-adjusted average episode cost by the national hospital median. The profiled hospital's MSPB amount is the sum of standardized, risk-adjusted spending across all of a hospital's eligible episodes divided by the number of episodes for that hospital. This is divided by the median MSPB amount across all episodes nationally. CMS adjusts spending amounts for area price variation and various risk factors including case mix, age, and hierarchical condition category (HCC) indicators.

To calculate the system-level MSPB index, we multiply each member hospital MSPB by the hospital's MEDPAR discharges for the most current year included in the study. This produces each hospital's weighted MSPB index.

To calculate the MSPB index for each health system, we sum the member hospital weighted MSPBs, sum the member hospital MEDPAR discharges, then divide the sum of the weighted MSPBs by the sum of the discharges. This produces the health system mean weighted MSPB index, which is the ranked measure in the study.

Hospital Consumer Assessment of Healthcare Providers and Systems overall hospital rating
To measure patient perception of care, this study uses the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) patient survey. HCAHPS is a standardized survey instrument and data collection methodology for measuring patients' perspectives on their hospital care. HCAHPS is a core set of questions that can be combined with customized, hospital-specific items to produce information that complements the data hospitals currently collect to support internal customer service and quality-related activities.

HCAHPS was developed through a partnership between CMS and AHRQ that had three broad goals:
–– Produce comparable data on patients' perspectives of care that allow objective and meaningful comparisons among hospitals on topics that may be important to consumers
–– Encourage public reporting of the survey results to create incentives for hospitals to improve quality of care
–– Enhance public accountability in healthcare by increasing the transparency of the quality of hospital care provided in return for the public investment

The HCAHPS survey has been endorsed by the NQF and the Hospital Quality Alliance. The federal government's Office of Management and Budget has approved the national implementation of HCAHPS for public reporting purposes.

Voluntary collection of HCAHPS data for public reporting began in October 2006. The first public reporting of HCAHPS results, which encompassed eligible discharges from October 2006 through June 2007, occurred in March 2008. HCAHPS results are posted on the Hospital Compare website, found at medicare.gov/hospitalcompare. A downloadable version of HCAHPS results is available.
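The discharge-weighted MSPB aggregation described above can be sketched as follows; the index values and discharge counts are invented:

```python
# Sketch of rolling member-hospital MSPB indexes up to a system value, as
# described above: weight each hospital's MSPB by its MEDPAR discharges,
# then divide the weighted sum by the total discharges. Figures are invented.

hospitals = [
    {"mspb": 0.98, "discharges": 4000},
    {"mspb": 1.05, "discharges": 1000},
]

weighted_sum = sum(h["mspb"] * h["discharges"] for h in hospitals)
total_discharges = sum(h["discharges"] for h in hospitals)
system_mspb = weighted_sum / total_discharges
print(round(system_mspb, 3))  # 0.994
```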

The HCAHPS data is adjusted by CMS for both survey mode (phone, web, or mail survey) and the patient mix at the discharging facility, since respondents randomized to the phone mode tend to provide more positive evaluations about their care experience than those randomized to the mail survey mode. Details on this adjustment's parameters are available for all facilities with each quarterly update, at hcahpsonline.org.

Although we report health system performance on all HCAHPS questions, only performance on the overall hospital rating question, "How do patients rate the hospital, overall?" is used to rank system performance.

At the hospital level, patient responses fall into three categories, and the number of patients in each category is reported as a percent:
–– Patients who gave a rating of 6 or lower (low)
–– Patients who gave a rating of 7 or 8 (medium)
–– Patients who gave a rating of 9 or 10 (high)

For each answer category, we assign a weight as follows: 3 equals high or good performance, 2 equals medium or average performance, and 1 equals low or poor performance. We then calculate a weighted score for each hospital by multiplying the HCAHPS answer percent by the category weight. For each hospital, we sum the weighted percent values for the three answer categories.

To calculate the HCAHPS score for each system, we multiply each member hospital's HCAHPS score by the hospital's MEDPAR discharges for the most current year included in the study. This produces each hospital's weighted HCAHPS score.

To calculate the HCAHPS score for each health system, we sum the member hospital weighted HCAHPS scores, sum the member hospital MEDPAR discharges, then divide the sum of the weighted HCAHPS scores by the sum of the discharges. This produces the health system mean weighted HCAHPS score, which is the ranked measure in the study.

We apply this same methodology to each individual HCAHPS question to produce mean weighted HCAHPS scores for the systems. These values are reported for information only in the study.

Methodology for financial performance measures

Data sources
The financial measures included in this study are sourced from the annual audited, consolidated financial statements of in-study health systems, when they are publicly available. Consolidated balance sheets and consolidated statements of operations are used. 65.1% of all in-study health systems* had publicly available audited financial statements for 2016. This included data for 84.2% of parent and other independent health systems, while audited financials were generally not available for subsystems (1.3%). Audited financial statements are obtained from the following sites:
–– Electronic Municipal Market Access
–– DAC Bond
–– US Securities and Exchange Commission
* We include subsystems in our study, as well as their parent systems, and independent systems with no subsystems. Subsystems generally are included in the parent
organization statements.
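The hospital-level HCAHPS category weighting described above can be sketched as follows; the answer percents are invented:

```python
# Sketch of the hospital-level HCAHPS weighting described above: multiply
# each answer-category percent by its weight (3 high, 2 medium, 1 low) and
# sum across the three categories. The percents are invented.

WEIGHTS = {"high": 3, "medium": 2, "low": 1}

def hcahps_score(answer_percents):
    """Weighted sum of the three answer-category percents for one hospital."""
    return sum(WEIGHTS[cat] * pct for cat, pct in answer_percents.items())

score = hcahps_score({"high": 72.0, "medium": 20.0, "low": 8.0})
print(score)  # 264.0
```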



Performance measure normalization For the LOS measure, we base our ranking on the
The inpatient mortality, complications, and LOS normalized, severity-adjusted LOS index expressed
measures are normalized based on the in-study in days. This index is the ratio of the observed and
population, by comparison group, to provide a more easily interpreted comparison among health systems. We assign each health system in the study to one of three comparison groups based on the sum of member hospitals' total operating expense. (Detailed descriptions of the comparison groups can be found in the Methodology section of this document.)

For the mortality and complications measures, we base our ranking on the difference between observed and expected events, expressed in standard deviation units (z-scores) that have been normalized. We normalize the individual health system expected values by multiplying them by the ratio of the observed to expected values for their comparison group. We then calculate the normalized z-score from the observed values, the normalized expected values, and the patient count.

For the HAI measures, we base our ranking on the unweighted mean of the normalized z-scores for the included HAIs. The included HAIs vary by comparison group (see page 50 for details). We normalize the individual system expected values for each HAI by multiplying them by the ratio of the observed to expected values for their comparison group for that HAI. We then calculate a normalized z-score for each HAI, for each health system, from the observed value, the normalized expected value, and the count.

the normalized expected values for each health system. We normalize the individual system's expected values by multiplying them by the ratio of the observed to expected values for its comparison group. The system's normalized index is then calculated by dividing the system's observed value by its normalized expected value. We convert this normalized index into days by multiplying it by the average LOS of all in-study systems (the grand mean LOS).

Differences between current and trend profiles

Normalization
The 2016 values on the current and trend graphs will not match for inpatient mortality, complications, or average LOS, because we use different norm factors to normalize the expected values.
– Current profile: We combine in-study systems' data for only the most current study year to calculate each comparison group norm factor (observed/expected). Note: The current study year comprised 2015 and 2016 MEDPAR data for inpatient mortality and complications, and 2016 data only for average LOS.
– Trend profile: We combine in-study systems' data for all five study years to calculate each comparison group norm factor.
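The normalization arithmetic described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's published code: the function names, group totals, and example figures are all invented, and the study does not spell out its exact z-score formula, so only the norm-factor, index, and HAI-mean steps are shown.

```python
# Illustrative sketch of the normalization steps described above.
# All names and numbers are invented for illustration.

def norm_factor(group_observed, group_expected):
    """Comparison-group norm factor: the group's observed/expected ratio."""
    return group_observed / group_expected

def normalized_expected(expected, group_observed, group_expected):
    """Scale a system's expected value by its comparison group's norm factor."""
    return expected * norm_factor(group_observed, group_expected)

def normalized_los_days(observed_los, expected_los,
                        group_observed, group_expected, grand_mean_los):
    """Normalized LOS index (observed / normalized expected), converted to
    days by multiplying by the grand mean LOS of all in-study systems."""
    index = observed_los / normalized_expected(expected_los,
                                               group_observed, group_expected)
    return index * grand_mean_los

def hai_rank_metric(z_scores):
    """Ranked HAI metric: unweighted mean of the normalized HAI z-scores."""
    return sum(z_scores) / len(z_scores)

# A system in a comparison group whose overall observed/expected ratio is 0.95:
exp_norm = normalized_expected(1000, 9500, 10000)        # 1000 * 0.95 = 950.0
days = normalized_los_days(4.5, 5.0, 9500, 10000, 5.2)   # (4.5 / 4.75) * 5.2
```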

In-study system counts
There are fewer in-study systems in the trend profile than in the current profile because some systems do not have enough data points for one or more measures to calculate a trend, so they are excluded. Three data points are required to calculate the t-statistic of the regression line, which is the ranked metric.
– Additional impact on the average LOS calculation: The observed/normalized expected LOS index for each system is converted into an average LOS in days by multiplying it by the mean average LOS for all in-study systems (sum of observed LOS/in-study system count). The grand mean average LOS will differ between the current and trend profiles when they contain different numbers of in-study systems.

Both the current and trend profiles are internally consistent. Each provides relevant comparisons of a profiled health system's performance versus peers and national benchmarks.

Why we have not calculated percent change in specific instances
Percent change is a meaningless statistic when the underlying quantity can be positive, negative, or zero. The actual change may mean something, but dividing it by a number that may be zero or of the opposite sign conveys no meaningful information, because the amount of change is not proportional to its previous value.

We also do not report percent change when the metrics are already percentages. In these cases, we report the simple difference between the two percentage values.

Protecting patient privacy
We do not report any individual measure data that is based on 11 or fewer patients, as required by CMS. This applies to the following measures:
– Risk-adjusted inpatient mortality index
– Risk-adjusted complications index
– 30-day mortality rates for AMI, HF, pneumonia, COPD, and stroke (CMS does not report a rate when the count is less than 25)
– 30-day readmission rates for AMI, HF, pneumonia, THA/TKA, COPD, and stroke (CMS does not report a rate when the count is less than 25)
– Average LOS
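The percent-change caveat above can be made concrete with a small numeric sketch. The figures are invented for illustration and are not study data:

```python
# Invented figures showing why percent change misleads when the baseline
# can be zero or negative (e.g., a normalized z-score crossing zero).
prior, current = -0.5, 0.5
actual_change = current - prior          # +1.0: a meaningful change in level
percent_change = actual_change / prior   # -2.0, i.e., "-200%": sign and size mislead

# For metrics that are already percentages, the simple difference in
# percentage points is reported instead (hypothetical readmission rates, %):
rate_prior, rate_current = 12.0, 10.5
difference = rate_prior - rate_current   # 1.5 percentage points
```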



Appendix D:
All health systems in study

Health system name Location
Abrazo Community Health Network Phoenix, AZ
Adventist Florida Hospital Orlando, FL
Adventist Health Central Valley Network Hanford, CA
Adventist Health System Altamonte Springs, FL
Adventist Health West Roseville, CA
Adventist Healthcare Rockville, MD
Advocate Health Care Downers Grove, IL
AHMC Healthcare Alhambra, CA
Alameda Health System Alameda, CA
Alecto Healthcare Services Irvine, CA
Alegent Creighton Health Omaha, NE
Alexian Brothers Health System Arlington Heights, IL
Allegheny Health Network Pittsburgh, PA
Allina Health System Minneapolis, MN
Alta Hospitals System Los Angeles, CA
Anderson Regional Health System Meridian, MS
Appalachian Regional Healthcare (ARH) Lexington, KY
Ardent Health Services Nashville, TN
Asante Medford, OR
Ascension Health St. Louis, MO
Aspirus Wausau, WI
Atlantic Health System Morristown, NJ
Aurora Health Care Milwaukee, WI
Avanti Hospitals El Segundo, CA
Avera Health Sioux Falls, SD
Banner Health Phoenix, AZ
Baptist Health Montgomery, AL
Baptist Health Little Rock, AR
Baptist Health Care Pensacola, FL
Baptist Health of Northeast Florida Jacksonville, FL
Baptist Health South Florida Coral Gables, FL
Baptist Healthcare System (KY) Louisville, KY
Baptist Memorial Health Care Corp Memphis, TN
BayCare Health System Clearwater, FL
Bayhealth Dover, DE
Baylor Scott & White Health Dallas, TX
Baystate Health Springfield, MA
Beacon Health System South Bend, IN
Beaumont Health Royal Oak, MI
BJC HealthCare Saint Louis, MO
Bon Secours Health System Marriottsville, MD
Bronson Healthcare Group Kalamazoo, MI
Brookwood Baptist Health Birmingham, AL
Broward Health Fort Lauderdale, FL
Cape Cod Healthcare Hyannis, MA
Cape Fear Valley Health System Fayetteville, NC
Capella Healthcare Franklin, TN
Capital Health System Trenton, NJ
Care New England Health System Providence, RI
CareGroup Healthcare System Boston, MA
CarePoint Health Bayonne, NJ
Carilion Clinic Roanoke, VA
Carolinas HealthCare System Charlotte, NC
Carondelet Health Network Tucson, AZ
Catholic Health Buffalo, NY
Catholic Health Initiatives Denver, CO
Catholic Health Services of Long Island Rockville Centre, NY
Centegra Health System Crystal Lake, IL
Centra Health Lynchburg, VA
Central Florida Health Leesburg, FL
Centura Health Englewood, CO
CHI Franciscan Health Tacoma, WA
CHI St. Joseph Health Bryan, TX
CHI St. Luke's Health Houston, TX
CHI St. Vincent Little Rock, AR
Christus Health Irving, TX
Citrus Valley Health Partners Covina, CA

Note: This year's 15 Top Health Systems (2018) are in bold, blue text.

Cleveland Clinic Cleveland, OH
Columbia Health System Milwaukee, WI
Community Foundation of Northwest Indiana Munster, IN
Community Health Network Indianapolis, IN
Community Health Systems Franklin, TN
Community Hospital Corp Plano, TX
Community Medical Centers Fresno, CA
Conemaugh Health System Johnstown, PA
Covenant Health Knoxville, TN
Covenant Health Systems (Northeast) Syracuse, NY
CoxHealth Springfield, MO
Crozer-Keystone Health System Springfield, PA
Dartmouth Hitchcock Health Lebanon, NH
DCH Health System Tuscaloosa, AL
DeKalb Regional Healthcare System Decatur, GA
Detroit Medical Center Detroit, MI
Dignity Health San Francisco, CA
Dimensions Health Corporation Cheverly, MD
Duke LifePoint Durham, NC
Duke University Health System Durham, NC
East Texas Medical Center Regional Healthcare System Tyler, TX
Eastern Connecticut Health Network Manchester, CT
Eastern Maine Healthcare Systems Brewer, ME
Edward Elmhurst Health Naperville, IL
Einstein Healthcare Network Philadelphia, PA
Emory Healthcare Atlanta, GA
Essentia Health Duluth, MN
Excela Health Greensburg, PA
Fairview Health Services Minneapolis, MN
Forrest Health Hattiesburg, MS
Franciscan Health Mishawaka, IN
Franciscan Missionaries of Our Lady Health System Baton Rouge, LA
Franciscan Sisters of Christian Charity Manitowoc, WI
Froedtert & the Medical College of Wisconsin Milwaukee, WI
Geisinger Health System Danville, PA
Genesis Health System Davenport, IA
Good Shepherd Health System Longview, TX
Greater Hudson Valley Health System Middletown, NY
Greenville Health System Greenville, SC
Guthrie Healthcare System Sayre, PA
Hartford HealthCare Hartford, CT
Hawaii Health Systems Corporation Honolulu, HI
Hawaii Pacific Health Honolulu, HI
HCA Capital Division Richmond, VA
HCA Central and West Texas Division Austin, TX
HCA Continental Division Denver, CO
HCA East Florida Division Ft. Lauderdale, FL
HCA Far West Division Las Vegas, NV
HCA Gulf Coast Division Houston, TX
HCA Healthcare Nashville, TN
HCA MidAmerica (North) Kansas City, MO
HCA MidAmerica (South) Kansas City, MO
HCA Mountain Division Salt Lake City, UT
HCA North Florida Division Tallahassee, FL
HCA North Texas Division Dallas, TX
HCA San Antonio Division San Antonio, TX
HCA South Atlantic Division Charleston, SC
HCA Tristar Division Nashville, TN
HCA West Florida Division Tampa, FL
Health First Rockledge, FL
Health Group of Alabama Huntsville, AL
Health Quest System Poughkeepsie, NY
HealthEast Care System Saint Paul, MN
HealthPartners Bloomington, MN
Henry Ford Health System Detroit, MI
Heritage Valley Health System Beaver, PA
HighPoint Health System Gallatin, TN
Hillcrest HealthCare System Tulsa, OK
HonorHealth Scottsdale, AZ
Hospital Sisters Health System Springfield, IL
Houston Healthcare Warner Robins, GA
Houston Methodist Houston, TX
IASIS Healthcare Franklin, TN



Indiana University Health Indianapolis, IN
Infirmary Health Systems Mobile, AL
Inova Health System Falls Church, VA
Inspira Health Network Vineland, NJ
Integris Health Oklahoma City, OK
Intermountain Health Care Salt Lake City, UT
Jefferson Health Philadelphia, PA
John D. Archbold Memorial Hospital Thomasville, GA
John Muir Health Walnut Creek, CA
Johns Hopkins Health System Baltimore, MD
KentuckyOne Health Lexington, KY
Kettering Health Network Dayton, OH
KPC Healthcare, Inc. Santa Ana, CA
Lafayette General Health Lafayette, LA
Lahey Health System Burlington, MA
LCMC Health New Orleans, LA
Lee Memorial Health System Fort Myers, FL
Legacy Health Portland, OR
Lehigh Valley Health Network Allentown, PA
LifeBridge Health Baltimore, MD
LifePoint Health Brentwood, TN
Lifespan Corporation Providence, RI
Los Angeles County-Department of Health Services Los Angeles, CA
Lourdes Health System Camden, NJ
Lovelace Health System Albuquerque, NM
Loyola Medicine Maywood, IL
Main Line Health Bryn Mawr, PA
MaineHealth Portland, ME
Mary Washington Healthcare Fredericksburg, VA
Maury Regional Health Columbia, TN
Mayo Foundation Rochester, MN
McLaren Health Care Corp Flint, MI
McLeod Health Florence, SC
MediSys Health Network Jamaica, NY
MedStar Health Columbia, MD
Memorial Health System Springfield, IL
Memorial Healthcare System Hollywood, FL
Memorial Hermann Health System Houston, TX
MemorialCare Health System Fountain Valley, CA
Mercy Chesterfield, MO
Mercy Health - Cincinnati Cincinnati, OH
Mercy Health Muskegon, MI
Mercy Health - Toledo Toledo, OH
Mercy Health - Youngstown Youngstown, OH
Mercy Health (OH) Cincinnati, OH
Mercy Health Network Des Moines, IA
Mercy Health System of Southeastern Pennsylvania Philadelphia, PA
Meridia Health System Cleveland, OH
Meridian Health Neptune, NJ
Methodist Health System Dallas, TX
Methodist Healthcare Memphis, TN
MidMichigan Health Midland, MI
Ministry Health Care Milwaukee, WI
Mission Health Asheville, NC
Montefiore Health System Bronx, NY
Mount Carmel Health System Columbus, OH
Mount Sinai Health System New York, NY
Mountain States Health Alliance Johnson City, TN
MultiCare Health System Tacoma, WA
Munson Healthcare Traverse City, MI
Nebraska Medicine Omaha, NE
Nebraska Methodist Health System Omaha, NE
New York City Health and Hospitals Corporation (HHC) New York, NY
New York-Presbyterian Healthcare System New York, NY
North Mississippi Health Services Tupelo, MS
Northern Arizona Healthcare Flagstaff, AZ
Northside Hospital System Atlanta, GA
Northwell Health Great Neck, NY
Northwestern Medicine Chicago, IL
Novant Health Winston Salem, NC
Ochsner Health System New Orleans, LA
Ohio Valley Health Services Wheeling, WV

OhioHealth Columbus, OH
Orlando Health Orlando, FL
OSF Healthcare System Peoria, IL
Palmetto Health Columbia, SC
Palomar Health Escondido, CA
Parkview Health Fort Wayne, IN
Partners HealthCare Boston, MA
PeaceHealth Vancouver, WA
Penn Highlands Healthcare DuBois, PA
Penn Medicine Philadelphia, PA
Phoebe Putney Health System Albany, GA
Physicians for Healthy Hospitals Hemet, CA
Piedmont Healthcare Atlanta, GA
PIH Health Whittier, CA
Premier Health Dayton, OH
Presbyterian Healthcare Services Albuquerque, NM
Presence Health Chicago, IL
Prime Healthcare Services Ontario, CA
ProHealth Care Waukesha, WI
ProMedica Health System Toledo, OH
Providence Health & Services Renton, WA
Queens Health System Honolulu, HI
Quorum Health Corporation Brentwood, TN
RCCH HealthCare Partners Brentwood, TN
Regional Health Rapid City, SD
Renown Health Reno, NV
Riverside Health System Newport News, VA
Rochester Regional Health Rochester, NY
Roper St. Francis Healthcare Charleston, SC
RWJBarnabas Health West Orange, NJ
Sacred Heart Health System Pensacola, FL
Saint Alphonsus Health System Boise, ID
Saint Francis Health System Tulsa, OK
Saint Joseph Mercy Health System Ann Arbor, MI
Saint Joseph Regional Health System Mishawaka, IN
Saint Luke's Health System Kansas City, MO
Saint Thomas Health Nashville, TN
Samaritan Health Services Corvallis, OR
Sanford Health Sioux Falls, SD
SCL Denver Region Denver, CO
SCL Health Denver, CO
Scripps Health San Diego, CA
Sentara Healthcare Norfolk, VA
Seton Healthcare Family Austin, TX
Sharp HealthCare San Diego, CA
Sinai Health System Chicago, IL
Sisters of Charity Health System Cleveland, OH
Skagit Regional Health Mount Vernon, WA
Southeast Georgia Health System Brunswick, GA
SoutheastHEALTH Cape Girardeau, MO
Southern Illinois Healthcare Carbondale, IL
Sparrow Health System Lansing, MI
Spartanburg Regional Healthcare System Spartanburg, SC
Spectrum Health Grand Rapids, MI
SSM Health Saint Louis, MO
St. Charles Health System Bend, OR
St. Elizabeth Healthcare Fort Thomas, KY
St. John Health System Tulsa, OK
St. John Providence Health Detroit, MI
St. Joseph Health System Irvine, CA
St. Joseph/Candler Health System Savannah, GA
St. Luke's Health System Boise, ID
St. Mary's Health Care System Athens, GA
St. Peters Health Partners Albany, NY
St. Vincent's Health System Birmingham, AL
St. Vincent's Healthcare Jacksonville, FL
Steward Health Care System Boston, MA
Success Health Boca Raton, FL
Sutter Health Sacramento, CA
Sutter Health Bay Area Sacramento, CA
Sutter Health Valley Area Sacramento, CA
Swedish Seattle, WA
Tanner Health System Carrollton, GA
Temple University Health System Philadelphia, PA



Tenet California Anaheim, CA
Tenet Central Dallas, TX
Tenet Florida Fort Lauderdale, FL
Tenet Healthcare Corporation Dallas, TX
Tenet Southern Atlanta, GA
Texas Health Resources Arlington, TX
The Manatee Healthcare System Bradenton, FL
The University of Vermont Health Network Burlington, VT
The Valley Health System Las Vegas, NV
ThedaCare Appleton, WI
TriHealth Cincinnati, OH
Trinity Health Livonia, MI
Trinity Health Of New England Hartford, CT
Truman Medical Center Kansas City, MO
UAB Health System Birmingham, AL
UC Health Cincinnati, OH
UCHealth Aurora, CO
UM Upper Chesapeake Health Bel Air, MD
UMass Memorial Health Care Worcester, MA
UNC Health Care System Chapel Hill, NC
UnityPoint Health Des Moines, IA
UnityPoint Health - Trinity Moline, IL
Universal Health Services King of Prussia, PA
University Health System Shreveport, LA
University Hospitals Health System Cleveland, OH
University of California Health System Los Angeles, CA
University of Florida Health Gainesville, FL
University of Maryland Medical System Baltimore, MD
University of Mississippi Medical Center Jackson, MS
University of New Mexico Hospitals Albuquerque, NM
University of Rochester Medical Center Rochester, NY
University of Texas System Austin, TX
UPMC Health System Pittsburgh, PA
UPMC Susquehanna Health System Williamsport, PA
UT Southwestern Medical Center Dallas, TX
Valley Baptist Health System Harlingen, TX
Valley Health Winchester, VA
Verity Health System Los Altos Hills, CA
Via Christi Health Wichita, KS
Vidant Health Greenville, NC
Virtua Health Marlton, NJ
WakeMed Raleigh, NC
Wellmont Health System Kingsport, TN
WellSpan Health York, PA
WellStar Health System Marietta, GA
West Tennessee Healthcare Jackson, TN
West Virginia United Health System Fairmont, WV
Western Connecticut Health Network Danbury, CT
Wheaton Franciscan Healthcare Waterloo, IA
Wheaton Franciscan Southeast Wisconsin Glendale, WI
Willis-Knighton Health Systems Shreveport, LA
Yale New Haven Health Services New Haven, CT

Footnotes

1. Kaplan RS, Norton DP. The Balanced Scorecard: Measures That Drive Performance. Harvard Bus Rev, Jan-Feb 1992.

2. Shook J, Chenoweth J. 100 Top Hospitals CEO Insights: Adoption Rates of Select Baldrige Award Practices and Processes. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. October 2012.

3. Foster DA. Hospital System Membership and Performance. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. May 2012.

4. HIMSS Analytics, Truven Health Analytics. 2012 HIMSS Analytics Report: Quality and Safety Linked to Advanced Information Technology Enabled Processes. Chicago, IL: HIMSS Analytics. April 2012.

5. Foster DA, Chenoweth J. Comparison of Baldrige Award Applicants and Recipients With Peer Hospitals on a National Balanced Scorecard. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. October 2011.

6. Young J. Outpatient Care Standards Followed More Closely at Top-Performing Hospitals. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. March 2011.

7. Young J. Hospitals Increase Cardiovascular Core Measure Compliance. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. November 2010.

8. Foster DA. Top Cardiovascular Care Means Greater Clinical and Financial Value. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. November 2009.

9. Lee DW, Foster DA. The association between hospital outcomes and diagnostic imaging: early findings. J Am Coll Radiol. 2009 Nov; 6(11):780-785.

10. Bonis PA, Pickens GT, Rind DM, Foster DA. Association of a clinical knowledge support system with improved patient safety, reduced complications and shorter length of stay among Medicare beneficiaries in acute care hospitals in the United States. Int J Med Inform. 2008 Nov;77(11):745-753. Epub June 2008.

11. Foster DA. HCAHPS 2008: Comparison Results for 100 Top Hospitals Winners Versus Non-Winners. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. August 2008.

12. Foster DA. Risk-Adjusted Mortality Index Methodology. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. July 2008.

13. Foster DA. Trends in Patient Safety Adverse Outcomes and 100 Top Hospitals Performance, 2000-2005. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. March 2008.

14. Shook J, Young J. Inpatient and Outpatient Growth by Service Line: 2006 Truven Health 100 Top Hospitals: Performance Improvement Leaders Versus Peer Hospitals. Ann Arbor, MI: Truven Health Analytics Center for Healthcare Improvement. August 2007.

15. Chenoweth J, Safavi K. Leadership Strategies for Reaching Top Performance Faster. J Healthc Tech. January 2007. HCT Project Volume 4.

16. Griffith JR, Alexander JA, Foster DA. Is Anybody Managing the Store? National Trends in Hospital Performance. Healthc Manag. 2006 Nov-Dec; 51(6):392-405; discussion 405-406.

17. McDonagh KJ. Hospital Governing Boards: A Study of Their Effectiveness in Relation to Organizational Performance. Healthc Manag. 2006 Nov-Dec; 51(6).

18. Bass K, Foster DA, Chenoweth J. Study Results — Proving Measurable Leadership and Engagement Impact on Quality. CMS Invitational Conference on Leadership and Quality. Sept 28, 2006.

19. Chenoweth J, Foster DA, Waibel BC. Best Practices in Board Oversight of Quality. The Governance Institute. June 2006.

20. Kroch E, Vaughn T, Koepke M, Roman S, Foster DA, Sinha S, Levey S. Hospital Boards and Quality Dashboards. J Patient Safety. 2(1):10-19, March 2006.

21. Cejka Search and Solucient, LLC. 2005 Hospital CEO Leadership Survey.

22. Health Research and Educational Trust and Prybil, L. Governance in High-Performing Organizations: A Comparative Study of Governing Boards in Not-For-Profit Hospitals. Chicago: HRET in Partnership with AHA. 2005.

23. Griffith JR, Alexander JA, Jelinek RC. Measuring Comparative Hospital Performance. Healthc Manag. 2002 Jan-Feb; 47(1).

24. Griffith JR, Knutzen SR, Alexander JA. Structural Versus Outcomes Measures in Hospitals: A Comparison of Joint Commission and Medicare Outcomes Scores in Hospitals. Qual Manag Health Care. 2002; 10(2): 29-38.

25. See the CMS website at cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/OutcomeMeasures.html.

26. See the CMS Hospital Compare website at hospitalcompare.hhs.gov.

27. See the CMS website at cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/OutcomeMeasures.html.

28. Iezzoni L, Ash A, Shwartz M, Daley J, Hughes J, Mackiernan Y. Judging Hospitals by Severity-Adjusted Mortality Rates: The Influence of the Severity-Adjusted Method. Am J Public Health. 1996; 86(10):1379-1387.



29. Iezzoni L, Shwartz M, Ash A, Hughes J, Daley J, Mackiernan Y. Using Severity-Adjusted Stroke Mortality Rates to Judge Hospitals. Int J Qual Health C. 1995;7(2):81-94.

30. DesHarnais SI, et al. Measuring Hospital Performance: The Development and Validation of Risk-Adjusted Indexes of Mortality, Readmissions, and Complications. Med Care. 28, no. 12 (December 1990): 1127-1141.

31. Iezzoni LI, et al. Chronic Conditions and Risk of In-Hospital Death. Health Serv Res. 29, no. 4 (October 1994): 435-460.

32. Iezzoni LI, et al. Using Administrative Data to Screen Hospitals for High Complication Rates. Inquiry. 31, no. 1 (Spring 1994): 40-55.

33. Adams S, Cotton B, Wade C, Kozar R, Dipasupil E, Podbielski J, Gill B, Duke J, Adams P, Holcomb J. Do Not Resuscitate (DNR) Status, Not Age, Affects Outcomes After Injury: An Evaluation of 15,227 Consecutive Trauma Patients. J Trauma Acute Care Surg. 2013 May; 74(5): 1327-1330.

34. Iezzoni LI, et al. Identifying Complications of Care Using Administrative Data. Med Care. 32, no. 7 (Jul 1994): 700-715.

35. Elixhauser A, Steiner C, Palmer L. Clinical Classifications Software (CCS), 2014. US Agency for Healthcare Research and Quality. Available via hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp.

36. Adams S, Cotton B, Wade C, Kozar R, Dipasupil E, Podbielski J, Gill B, Duke J, Adams P, Holcomb J. Do Not Resuscitate (DNR) Status, Not Age, Affects Outcomes After Injury: An Evaluation of 15,227 Consecutive Trauma Patients. J Trauma Acute Care Surg. 2013 May; 74(5): 1327-1330.

37. Iezzoni LI, et al. Identifying Complications of Care Using Administrative Data. Med Care. 32, no. 7 (Jul 1994): 700-715.

38. Iezzoni LI, et al. Using Administrative Data to Screen Hospitals for High Complication Rates. Inquiry. 31, no. 1 (Spring 1994): 40-55.

39. Iezzoni LI. Assessing Quality Using Administrative Data. Ann Intern Med. 127, no. 8 (Oct 1997): 666-674.

40. Weingart SN, et al. Use of Administrative Data to Find Substandard Care: Validation of the Complications Screening Program. Med Care. 38, no. 8 (Aug 2000): 796-806.

41. NHSN Standardized Infection Ratio (SIR): A Guide to the SIR, National Healthcare Safety Network. Centers for Disease Control and Prevention, National Center for Emerging and Zoonotic Infectious Diseases, Division of Healthcare Quality Promotion. July 2017. Available at cdc.gov/nhsn/pdfs/ps-analysis-resources/nhsn-sir-guide.pdf. Last accessed Jan. 23, 2018.

42. Foster, D. Model-Based Resource Demand Adjustment Methodology. Watson Health (Truven Health Analytics). July 2012.

43. AHRQ Medicare spending per beneficiary (MSPB) measure summary: Cost to Medicare of services performed by hospitals and other healthcare providers during a MSPB episode. October 2015. Available via qualitymeasures.ahrq.gov/summaries.

About IBM Watson Health™
Each day, professionals make powerful progress toward a healthier future. At IBM Watson Health, we help remove obstacles, optimize their efforts, and reveal powerful new insights so they can transform health for the people they serve. Working across the landscape, from payers and providers to government and life sciences, we bring together deep health expertise, proven innovation, and the power of artificial intelligence to enable our clients to uncover, connect, and act on the insights that advance their work — and change the world.

For more information
Visit 100tophospitals.com, call 800-525-9083 option 4, or send an email to 100tophospitals@us.ibm.com.

© Copyright IBM Corporation 2018
IBM Corporation
Software Group
Route 100
Somers, NY 10589
ibm.com/watsonhealth
800-525-9083

Produced in the United States of America, April 2018

IBM, the IBM logo, ibm.com, and Watson Health are trademarks of International Business Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at: ibm.com/legal/copytrade.shtml.

This document is current as of the initial date of publication and may be changed by IBM at any time. Not all offerings are available in every country in which IBM operates.

The information in this document is provided "as is" without any warranty, express or implied, including without any warranties of merchantability, fitness for a particular purpose and any warranty or condition of non-infringement.

IBM products are warranted according to the terms and conditions of the agreements under which they are provided.

TOP 18863 0418

Transformational healthcare solutions from Truven Health Analytics®, now offered by IBM Watson Health.
