If one fifth of implantable cardioverter defibrillators (ICDs) are being put in the wrong people, as a recently published study claims, then the potential savings from tightening guidelines and coverage policies would seem straightforward. But the study fueled a still-smoldering debate among cardiologists and, according to some researchers, has helped to show that the clinical studies health insurers rely on to set their coverage policies may not always be what they seem.
Duke University researcher Sana Al-Khatib, MD, set off a firestorm earlier this year with her article in the Journal of the American Medical Association that reported that physicians weren’t following evidence-based guidelines in 22.5 percent of ICD implantations. Soon afterward, the American College of Cardiology (ACC) and the Heart Rhythm Society (HRS) issued a statement defending the medical evidence behind ICD use. Meanwhile, health plans have stood pat with their policies for covering ICDs. And now Al-Khatib herself has stopped giving interviews on the subject.
“One in five patients who received a primary prevention ICD may not have needed the device,” she says in a written statement from the Duke Clinical Research Institute. “Our study highlights the importance of educating health care providers about appropriate indications for ICD therapy and highlights the importance of enhancing health care practitioners’ adherence to evidence-based practices.” While most of the 111,707 study subjects had Medicare, about a third (33,957) had commercial insurance.
For health plans, putting an expensive device such as an ICD in the wrong people can have serious cost consequences, especially in light of a near consensus among clinicians and researchers that a large percentage of those who truly need the device may not be getting it anyway. In Al-Khatib’s study, about 7,200 commercial insurance patients got inappropriate ICDs — at a cost of $200 million, based on Medicare per-procedure payments.
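A quick back-of-the-envelope check of those figures (a sketch only; the per-procedure amount below is implied by dividing the study’s reported total by the patient count, not stated in the article):

```python
# Rough check of the study's cost arithmetic.
# Assumption: the total is spread evenly across the ~7,200 patients.
inappropriate_implants = 7200
total_cost = 200_000_000  # dollars, per the JAMA study's estimate

implied_cost_per_procedure = total_cost / inappropriate_implants
print(f"Implied Medicare payment per ICD: ${implied_cost_per_procedure:,.0f}")
```

The implied figure works out to roughly $28,000 per implantation, consistent in order of magnitude with Medicare device-plus-procedure payments of the period.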
An ICD is a small device placed in the chest or abdomen that uses electrical pulses to control an irregular heartbeat. Unlike a pacemaker, which delivers only low-energy pulses to correct irregular heartbeats, an ICD can deliver both high- and low-energy pulses to correct arrhythmias in the heart’s lower chambers, most commonly the left ventricle.
ICDs are typically implanted in people at risk of life-threatening arrhythmias or in those whose heart attack has damaged the heart’s electrical system, according to the National Heart, Lung, and Blood Institute (NHLBI).
Most common problem
The NHLBI describes the low-energy pulses that an ICD delivers as feeling like a flutter, whereas the high-energy pulses feel like “a thumping or a painful kick” in the chest. The most common problem with ICDs — besides the risk of infection related to the implantation surgery — is that they can sometimes give pulses that aren’t needed. Doctors can reprogram ICDs or prescribe medications so the electrical pulses occur less often.
American Heart Association and ACC clinical guidelines state that the devices are not appropriate for patients who have had a recent heart attack, who have a recent diagnosis of heart failure, who have severe heart failure, or who have recently had coronary artery bypass surgery. Al-Khatib’s study used those guidelines, among others, in determining appropriate use. In a written statement, Al-Khatib notes that a 22.5 percent rate of inappropriate ICD implants is excessive even allowing that not all patients fit neatly into clinical guidelines.
According to the Duke study, 36 percent of those who received the device but did not meet the guideline criteria had had a heart attack within the previous 40 days, and 62 percent had a recent heart failure diagnosis. Patients who had non-evidence-based ICD implants were significantly more likely to have complications and had much higher rates of death in the hospital.
Soon after JAMA published the study, the ACC and HRS responded that “the vast majority of implanting physicians are prescribing ICDs with the confidence that they are providing the best care for their patients.” Bruce Wilkoff, MD, president of HRS and a cardiologist at Cleveland Clinic, says he has found several possible flaws in Al-Khatib’s research, not the least of which is that it relied on several sets of clinical guidelines that were revised over the study’s four-year span. “That is not to say there was not too much variability in the usage of these guidelines, and clearly this does bring up something that we should be looking into, but I would be very careful about saying 1 in 5 ICDs are not appropriately applied,” he says.
Wilkoff gives one example of when strict adherence to clinical guidelines may not make sense. Waiting 90 days until after heart bypass surgery to implant an ICD may not be practical if a patient develops complete heart block — failure of the ventricles to receive electrical impulses — and needs a pacemaker. “It would be inappropriate to put in the pacemaker and then wait the 90 days and operate on the patient again to put in a defibrillator.”
Not always straightforward
But clinical guidelines are not always straightforward when it comes to ICDs, Wilkoff says. “There are different kinds of guidelines: There are implantation guidelines, society guidelines, and payment guidelines, and they’re not the same,” he says. “A procedure may be medically appropriate but the physician may not have the approval from the plan to bill for it.” Nonetheless, he acknowledges that the study does point to a need for some oversight.
How plans measure up
Health plans have taken their own steps to ensure that ICDs are implanted appropriately. According to Al-Khatib’s study, commercial insurers and HMOs had slightly lower rates of non-evidence-based implants — 21.65 percent and 21.3 percent, respectively — than government plans. UnitedHealth uses the ICD Registry, a national database developed by the ACC and HRS, as a centerpiece of its program to designate premium cardiac specialty centers. Anthem rewards centers that meet quality measures for cardiac care with a Blue Distinction designation.
Plans rely on published clinical research to write their coverage policies for procedures, especially with regard to ICDs and other expensive medical technology. Douglas Hadley, MD, head of Cigna’s coverage policy unit, asserts that the medical evidence on ICDs is actually better than for most devices. “That doesn’t mean it has answered every question,” he says, “but in terms of trying to define those individuals for whom we have high certainty that implanting the ICD will improve their outcome, we know a lot of subpopulations for whom these are entirely appropriate to implant.”
Cigna uses a committee comprising medical directors from various specialties to write coverage policies, and they pore over clinical studies to make coverage determinations, Hadley says. “Where possible we like to look at controlled trials, we like to look at comparative effectiveness studies, and you’ve got both of these for this particular technology,” he says.
Gillian Sanders, MD, a Duke Clinical Research Institute colleague of Al-Khatib and co-author of the JAMA ICD study, exposed the variability of clinical research on ICDs in a 2005 article in the New England Journal of Medicine. That study looked at eight clinical trials to evaluate the cost-effectiveness of ICDs for preventing fatal heart attacks.
Less effective, more expensive
“Of the eight trials, two found no mortality benefits, so the use of an ICD was less effective and more expensive than for patients who did not receive an ICD,” Sanders says. “We found that the other six trials added significantly to life expectancy but at an increased cost, and that compared to controls the cost of this additional gain in life expectancy was in the range of commonly accepted cost-effectiveness values.”
Adds Cigna’s Hadley: “We should recognize that there are people out there who would benefit from these devices and are not getting them. Part of the coverage policy tries to help balance that underutilization with overutilization.”
John Wilson, MD, an independent researcher and former director of the heart failure program at Vanderbilt Medical Center in Nashville, has criticized the way that ICD trials are reported in the research literature, with his most recent article appearing this year in the American Journal of Cardiology. “There have been other papers that looked at the incidence of complications in people with devices being implanted, specifically focusing on the Medicare population, finding that the incidence of complications is higher than those reported in trials and higher than expected,” he says. “That would not surprise too many people because the trials were very selective in whom they enroll, whereas the Medicare population tends to be a more generic population and would probably have more risk factors.”
In his research, Wilson looked at how 10 major ICD trials were reported in the medical literature. These reports emphasized ICD benefits while underreporting ICD complications. “What was remarkable about these papers was that not only did the writers not generally provide information about complications in any consistent, comprehensive manner, but in discussion sections there was a remarkable lack of any attention to risk — period. That is pretty striking when you consider that the risks are rather substantial.”
Wilson cautions plan managers to pay more attention to ICD complications, not only from potentially improper use but also when the use is in line with clinical guidelines. Wilson notes that for every trial that supports the efficacy of ICDs, another trial raises concerns about the therapy. “For the largest trial we analyzed, ICD therapy improved the survival of 7 percent of patients at 5 years. However, clinically significant complications occurred in 5 percent of patients at the time of ICD implantation and in 9 percent of patients after implantation. Therefore, the use of ICD therapy in 100 patients actually did not benefit 93 patients and resulted in significant complications in 14 patients,” he says.
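Wilson’s figures can be restated as a simple per-100-patients tally (a sketch using only the percentages quoted above):

```python
# Restating Wilson's largest-trial figures per 100 patients treated.
patients = 100
survival_benefit_rate = 0.07     # 7% absolute survival benefit at 5 years
complications_at_implant = 0.05  # 5% with complications during implantation
complications_after = 0.09       # 9% with complications after implantation

benefited = patients * survival_benefit_rate
no_benefit = patients - benefited
complications = patients * (complications_at_implant + complications_after)

print(f"Benefited: {benefited:.0f}, no benefit: {no_benefit:.0f}, "
      f"significant complications: {complications:.0f}")
```

The tally reproduces Wilson’s point: of 100 patients implanted, 93 see no survival benefit while 14 experience a clinically significant complication.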
He also advises plan managers to be mindful of how clinical studies use framing to advance a specific point of view.
Late last year, researchers in Rome called for new trials of the preventive use of ICDs in seniors because five studies that cardiologists consider to be seminal trials were skewed toward younger patients, even though seniors get most of the devices. Their article, published in the Annals of Internal Medicine, noted that four of the trials did not even sort mortality data by age, and claimed the data do not conclusively show that ICD therapy improves survival in elderly patients with severe left ventricular dysfunction.
Use of an ICD is projected to add 1–3 quality-adjusted life years (QALYs) at a cost of $68,300 to $101,500 per patient, compared with drug therapy, according to Sanders’s 2005 study.
However, Sanders has a warning about the overall costs of ICDs. “It would cost more than several billion dollars annually to implant ICDs in patients for whom they have been proven effective and are considered cost effective,” she says. “If physicians extend the use of ICDs in lower-risk patients where the cost-effectiveness is even less favorable, the societal costs will increase even more.”
Clinical guidelines regarding ICDs are not always clear, and plans may need to oversee use more energetically.
Studies about ICDs tend to overlook the substantial risks involved in implanting the devices, argues the independent researcher John Wilson, MD.