DM Standards Off and Crawling
MANAGED CARE February 2004. ©MediMedia USA
When American Healthways posted a collaborative methodology on its Web site, not everyone in the industry applauded.
One year ago, Victor Villagra had good reason to believe the disease management industry was on the threshold of embracing a standard method for evaluating just how well DM programs work.
Villagra, the former president of the Disease Management Association of America (DMAA), had responded to an invitation to join 150 doctors and other health care professionals in Nashville to inspect the "Standard Outcomes Metrics and Evaluation Methodology for Disease Management Programs" assembled by a group of researchers at the prestigious Johns Hopkins University in collaboration with American Healthways.
With some input from attendees, the results were publicly posted on the American Healthways Web site and all were invited to hear Villagra, now president of Health & Technology Vector Inc., give this outspoken blessing: "This document will become an invaluable reference for private and public payers, consultants and disease management organizations seeking both methodological rigor and real-world practicality."
Sense of urgency
Adding to the urgency of the metrics summit was a deep-seated belief (subsequently justified by Medicare reform) of many of the attendees that if DM companies weren't able to seize the high ground on methodology, an inquisitive Medicare agency that had already begun to probe the workings of disease management would wind up doing the job for them — which could present a host of unexpected challenges.
A year later, though, DM vendors around the country remain deeply skeptical of the jointly developed standard and even less likely to identify themselves with an industry standard than in the months leading up to that meeting in Desert Springs, Calif., where Villagra made his hopeful statement.
The one point everyone now seems ready to agree on is that the American Healthways meeting marked an important milestone along a still unmapped journey to industry standardization. Who will lead the way forward, and what direction they will take, remain to be seen.
"I'm not sure that the industry is at a point in its development in which it can adopt a single standard for how to measure outcomes — largely because we are only now at a point where we have multiyear outcomes on the same populations that we can study and report on," says Christobel Selecky, CEO of LifeMasters Supported Care and the president-elect of the Disease Management Association of America. "There is so much learning going on that it may be premature to adopt one single standard. In addition, we have customers who have their own ideas about how they want to measure standards."
Not on radar screens
Ken Mays, an analyst for Mathematica Policy Research, recently completed an exhaustive survey of DM practices around the country.
"For now, at least, large employers don't have it on their radar screens," says Mays. "From my perspective, we did not get any evidence that employers are aware of this activity. My sense is it's probably too new to have penetrated very deeply."
If a sampling of opinions from around the DM industry is any indication, American Healthways is unlikely to gain much immediate help in promoting the effort. At best, DM officials in a broad range of positions give American Healthways a pat on the back for taking on the exercise, then either point out where the methodology goes wrong or tout their own alternatives — or both.
Some who never received an invitation to the California conclave are also in no mood to lend it unequivocal support.
"People were upset that they didn't include us and the rest of the industry," says Derek Newell, vice president for outcomes at LifeMasters. "We would have liked a more collaborative process."
The Hopkins methodology draws its sharpest criticism from those who say that it is still far too inexact in its pre/post approach to measuring outcomes. Without randomized, controlled trials — the gold standard for authenticating any health care product — there can be no undisputed evidence that a DM program works.
And in an imperfect world, many of the biggest DM players in the country are willing to settle for alternative methods that crudely demonstrate or imply bottom-line effectiveness. Also, DM companies still pursue radically different strategies for handling the chronically ill. The American Healthways method of evaluating entire populations may fit in with its own widely touted approach, but it is particularly controversial for outfits that specialize in working with specific disease groups.
"It gives us a starting place," offers Newell, "which I don't think we had before."
And Villagra, for one, thinks that that is a good sign for the future of a methodology that he remains as firmly committed to now as the day it was posted.
"As the industry matures," says Villagra, "my hope is that it sees the benefits of huddling around this document as a starting point, as the best and most explicit and most real-world-friendly method that is available anywhere. If that happens, then I think the methodology will stick."
Not to be outdone, the Disease Management Association of America (DMAA) has also issued a white paper on evaluating methods in disease management (see "The DMAA's Own Approach").
Converting the skeptical
It all started two years ago when Bob Stone, the executive vice president of American Healthways and 2003 president of the Disease Management Association of America, launched an ambitious effort to rally the DM industry around the flag of standardization.
American Healthways, as a major player, has "a responsibility to be constantly raising the bar and increasing the credibility of the entire industry," says Stone. "The skepticism around disease management outcomes is very real. It ranges from health plans to stories in the New York Times and Wall Street Journal. The Congressional Budget Office is skeptical.
"That skepticism is warranted and exists because disease management is a nice-sounding homogeneous term, but disease management is not homogeneous. There are different methods of delivery, different outcomes methodologies. From our perspective, there's no way the industry could address that skepticism without a uniform methodology."
The answer, Stone felt, could be provided by the Hopkins methodology. And, he adds, American Healthways has found support outside the ranks of its own company hierarchy.
"I think we are making progress," adds Stone. "I think we would have been naïve to assume that every disease management provider would immediately leap on a methodology that was not its own. You have to be able to crawl before you walk. Or run. It's better to have something than nothing. Now we have something. The most important aspect of the methodology is that it sets out a framework for the base period population, which is identified and tracked and measured in exactly the same way as the intervention period population. So there is no ability when you follow this methodology to inappropriately shift costs to make the results look better, no ability to subset the population, focusing only on a high-risk subset."
The lukewarm reception from the industry, he says, comes from the fact that the Hopkins methodology has the American Healthways name on it, and from the likelihood that there are "companies that if they use the methodology would not have reportable positive results and therefore didn't want to adopt it themselves."
Villagra thinks that there is more acceptance of the standard than most DM companies are willing to make public.
"I think there may be some reluctance on the part of some disease management companies to acknowledge this particular methodology as a standard," he says. "But I have been told that many people working in the field feel that the contents of the document are superb."
Health plans take notice
Signs of support: The National Business Coalition on Health has endorsed it, says Stone. The journal Disease Management has published the Hopkins document. And, he adds, "a number of health plans are at least making reference to this methodology. The methodology has been shared with the Centers for Medicare and Medicaid Services, but it remains to be seen to what extent it will find its way into requests for proposals coming out of the federal government."
Stone may never find many others in the industry cheering for the leadership of American Healthways in this matter, but he does gain approval for taking a very public step by posting the methodology on the company's Web site.
"To say this is a standard suggests that everything you need to know is known," says Barry Zajac, vice president for clinical informatics at Airlogix. And without some testing "in the real world," he adds, "that just can't be proven."
"It's a really good effort," offers Selecky as she cautiously weighs in, "but...."
For starters, she says, there are several issues that need to be taken into account when you do a pre-post evaluation, not the least of which are things like yearly changes in the mix of the population, changes in the physician network, changes in payments to providers, and the introduction of new medical technologies and treatments.
"It's a valid way of doing it," she continues, "but what we have seen is that there are several valid ways of doing it."
One of the most frequently leveled charges against pre/post is that any before-and-after comparison in DM suffers from a regression to the mean. Sick people have a habit of either getting better or dying, which gives DM programs a statistical advantage by showing a return on investment that isn't earned.
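The statistical trap is easy to demonstrate. The following is a minimal, purely illustrative Python simulation — the cost figures, distribution, and selection threshold are invented, not drawn from any program discussed in the article. Members are selected for a hypothetical cohort because of high baseline costs, and their average costs fall the following year even though no intervention exists at all:

```python
import random

random.seed(0)

# Simulate two years of per-member costs with NO intervention of any kind.
# Year-to-year costs are independent draws, so members selected for having
# high costs in year 1 will, on average, look "improved" in year 2.
members = [(random.gauss(5000, 2000), random.gauss(5000, 2000))
           for _ in range(10_000)]

# Select a "chronically ill" cohort by high baseline cost -- the kind of
# high-risk subsetting that invites regression to the mean.
cohort = [(y1, y2) for y1, y2 in members if y1 > 8000]

pre = sum(y1 for y1, _ in cohort) / len(cohort)
post = sum(y2 for _, y2 in cohort) / len(cohort)

print(f"pre-period mean cost:  ${pre:,.0f}")
print(f"post-period mean cost: ${post:,.0f}")
print(f"apparent 'savings' with no program at all: {1 - post / pre:.0%}")
```

The post-period average drops sharply simply because extreme values tend to be followed by values closer to the population mean — exactly the unearned "return on investment" the critics describe.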
"Frankly, I think that the Hopkins methodology does recognize the regression-to-the-mean issue," says Selecky.
Anyone insisting on a rigidly exacting methodology demands randomized, controlled trials conducted over a long period.
"Until we do randomized, controlled trials," says Gordon Norman, MD, PacifiCare's vice president for disease management, "it won't be perfect."
The problem, of course, is that randomized, controlled trials take time and money, something a lot of customers don't feel they have nearly enough of. "But you also can't let the perfect be the enemy of the good," insists Norman. "If you have good data that indicates significant improvements, the best way forward now is to get quality DM programs in place and help the chronically ill and let scientifically tested industry standards take shape over time."
Too good to be true?
A threat to everyone, he adds, are programs that wildly overstate the returns they can deliver.
And that's why Stone got started.
Stone cited one DM company that was clearly stacking the deck in its favor. "All the cost of hospitalization went into the base period. And then patients were tracked individually." Stone calls that "gaming the system" — artificially inflating costs for chronically ill patients by including high-cost hospitalizations, and making the comparison costs of those in the program fraudulently low.
Often the numbers don't add up. Just listen to Al Lewis, says Stone. Lewis, the head of the Disease Management Purchasing Consortium, is always warning purchasers against companies that offer returns that are simply too good to be true.
Stone is less likely, though, to share Lewis's thoughts about the Hopkins methodology.
"The enemy of innovation"
Early on, says Lewis, he and Healthways shared many of the same notions on tracking outcomes.
"We both thought that these were the most valid ways of measuring outcomes," he says. "The consortium and Healthways were the first to have the outcomes measured validly."
But Hopkins, says Lewis, just doesn't measure up. Using data sets from two vendors that follow the Hopkins methodology results in "producing outcomes that don't pass the test of possibilities."
In one case the outcome was impossible and in another case the outcome showed a dramatic reduction in cardiac-related cases, whereas the incidence rates actually went up.
"I give Hopkins a huge A for effort here," says Lewis, "but this is an evolving science. Standards are the enemy of innovation. Standards imply that there is a gold standard. Anytime you're in an evolving industry, it's probably not a good idea to bet on any one standard. That's why we use two methodologies in the consortium's RFPs."
But Lewis isn't willing to reveal just what those standards are. "If a whole coalition of people got together, I would put mine out there," he explains, "but right now it's a benefit of membership in the Disease Management Purchasing Consortium."
By advising a large number of buyers on how to contract for DM, Lewis has considerable influence over standards. "If he suggests the methodology is a good idea, it'll show up in RFPs as a requirement," says Zajac.
Stone's response: Lewis can join him in the public spotlight any time he likes.
"I think the difference in perspective between the importance we see for this industry and what the consortium sees is reflected in the fact that we made ours public and he's chosen not to," says Stone. "One of the things that is in these documents is a form for interested parties to provide feedback. I'd be delighted to have feedback to look at, for inclusion in the next version. We meet with Hopkins all the time and when we have enough input or enough perspective to make changes warranted, we convene a steering committee."
As of now, though, there are no immediate plans to convene a new standardization conference — with Lewis or without him.
Like many in the business, PacifiCare's Norman gives the Hopkins methodology high marks for relying on a set of general principles that are widely accepted in DM. "The general principles are probably indisputable," he says. "I think the methodology codified what the people who were doing the best work were already doing."
But there are also plenty of reasons why he wouldn't include the Hopkins methodology in one of PacifiCare's RFPs without modifications.
In particular, Norman takes exception to Hopkins's reliance on applying a program to an entire population rather than narrowing it down to simply high-risk members.
Take that principle and apply it to all patients with diabetes, says Norman. "It makes sense from a societal point of view," he says, "but it doesn't necessarily make sense from a business point of view, or a health plan that has 17 percent membership turnover."
Customer always right
Customers want PacifiCare to lower cost and control premiums, adds Norman. "They're not necessarily asking us to raise costs now to reduce expenses 10 years off. That's a laudable thing to do. It's an interesting societal debate. But customers don't want us to raise costs for anything" — which is one big reason why managed care companies have been migrating in ever-increasing numbers to DM.
It would be great for a customer to come along and commit to a 10-year contract so PacifiCare could do all the things that work in the long run, he adds. But he isn't holding his breath until one comes along, either.
"The crisis of health care today in every survey I see is health care costs," says Norman. "It's 1990 all over again." Businesses want short-term strategies to cut costs and give them some alternative to passing on a greater share of costs to their workers.
So when Norman willingly settles for cruder standards of measurement to judge the outcomes for clients, the critical need is being served: DM, applied properly, works to save money. Just look at CHF, he says, the subject of pioneering DM programs that have been studied repeatedly.
"If we've gotten to the point that doing CHF in a variety of settings and methodologies has returns of 3 to 1 to 5 to 1," says Norman, "is there a need for more compelling proof beyond that? My answer is frankly 'no.'" But on the other hand, "if these results can be replicated by conscientious people using an imperfect but adequate methodology for business purposes, then after a while, you have to believe that ROI is there."
The moral to the story, says Norman: "Keep it real."
But experts are quick to add that the day is coming when some sort of uniform methodology becomes standard operating procedure — whether DM companies like it or not. The industry can take a big hand in creating that methodology, or watch the government impose one.
"In fact, the Medicare reform bill that was signed into law last year does establish randomized control as the method of evaluation — at least for the first phase," says Selecky. Still, the primary concern for now is what customers want and patients need.
Further study needed?
Says Selecky: "I'm not going to go to a customer and ram a methodology down his throat. This doesn't mean that at some point down the road, we won't be able to migrate toward a common approach, but that's going to require a lot of further study and education of the market."
Says Norman: "I frankly don't know if it will be any easier 10 years from now. We can't achieve the level of pristine purity that the New England Journal of Medicine would like to see in a peer-reviewed article, nor can we wait to leverage DM until that happens."
But the important goal, he says, is "doing the right thing for the patient." For now, at least, all signs indicate that "doing the right thing" will remain the single most influential — if still hazy — industry standard in the DM business.
The DMAA's Own Approach
Eager to clear up the confusion that planners and managers encounter in measuring disease management programs, the Disease Management Association of America has issued new guidance in a paper titled Evaluation Methods in Disease Management: Determining Program Effectiveness. The paper's overall theme, however, is caution: no matter which design planners choose, careful attention must be paid to identifying and controlling the potential biases that can invalidate results.
Key biases to watch out for include selection bias, regression to the mean, and measurement error.
The paper suggests three methods of evaluation that can measure the clinical and financial effects of a DM program while avoiding statistical biases: the total population approach, survival time analysis, and time series analysis.
Total population approach. This is the most widely used model in DM evaluations. It is a pre-test/post-test design that measures the clinical and financial performance of a DM program during a given time period, usually one year, against a comparable baseline period during which no DM intervention occurred. Despite its popularity, it is a weak research and evaluation technique; its most basic limitation is the absence of a control group against which to compare outcomes.
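The pre/post arithmetic behind the total population approach can be sketched in a few lines. The function names, trend factor, and dollar figures below are hypothetical illustrations, not taken from the DMAA paper or the Hopkins document:

```python
def pmpm(total_cost: float, member_months: int) -> float:
    """Per-member-per-month cost for a whole population."""
    return total_cost / member_months

def pre_post_savings(base_cost: float, base_mm: int,
                     program_cost: float, program_mm: int,
                     trend: float = 1.0) -> float:
    """Trend-adjusted pre/post comparison: the baseline PMPM is inflated
    by an assumed medical-cost trend, then compared with the PMPM observed
    during the program year. A positive result is the apparent savings."""
    expected = pmpm(base_cost, base_mm) * trend
    actual = pmpm(program_cost, program_mm)
    return expected - actual

# Illustrative numbers: $12M over 60,000 member-months at baseline,
# an assumed 8% cost trend, and $12.4M over 62,000 member-months
# during the program year.
savings_pmpm = pre_post_savings(12_000_000, 60_000,
                                12_400_000, 62_000, trend=1.08)
print(f"apparent savings: ${savings_pmpm:.2f} PMPM")
```

The weakness the paper identifies is visible here: everything hinges on the assumed trend and on the two populations being genuinely comparable, and there is no control group to check either assumption.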
Survival time analysis. A better approach than the total population design, this method takes into account the effect of enrollment and withdrawal patterns during the program's implementation.
Time series analysis. This approach controls for the effect of many biases, provides a timeline for changes in outcome patterns, and is more suitable for evaluating changes at the population level, which is what DM programs are intended to do.
The paper also addresses the concern that using cost as the outcome measure for an evaluation is inappropriate: using cost as an outcome variable can expose the results to measurement bias.
One way to avoid this bias is to determine if a pre-established target is met. Each percentage reduction in utilization is worth a given payment. For example, each percentage point decrease in hospitalizations may be worth X dollars in bonus to the DM vendor above a set fee limit. Conversely, an increase in hospitalizations may lead to reimbursement of fees to the health plan or some other payment penalty.
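The percentage-point payment scheme described above reduces to simple arithmetic. In this sketch, the parameter names and dollar values are illustrative assumptions, not figures from the DMAA paper:

```python
def utilization_payment(base_rate: float, current_rate: float,
                        dollars_per_point: float) -> float:
    """Payment tied to a pre-established utilization target. Each
    percentage point of reduction in the hospitalization rate earns the
    DM vendor a bonus; each point of increase yields a negative payment,
    i.e., fees reimbursed to the health plan."""
    points_of_reduction = (base_rate - current_rate) / base_rate * 100
    return points_of_reduction * dollars_per_point

# Illustrative: 250 admissions per 1,000 members falls to 230,
# an 8 percent reduction, at $10,000 per percentage point.
bonus = utilization_payment(250, 230, 10_000)
# Conversely, a rise from 250 to 260 is a 4 percent increase,
# producing a penalty instead of a bonus.
penalty = utilization_payment(250, 260, 10_000)
print(bonus, penalty)
```

Because the payment keys off a utilization rate rather than dollars spent, it sidesteps the measurement bias the paper associates with cost-based outcome variables.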