Research Topics Underpin Comparative Effectiveness

The government committee charged with helping health plans and providers choose best treatments suggests 100 areas of interest

Frank Diamond
Managing Editor

MANAGED CARE November 2009. ©MediMedia USA
One could say that the government, in its backing of comparative effectiveness research, wants to compare apples and oranges. The Committee on Comparative Effectiveness Research Prioritization released 100 topics that it says should be the focus of research. The very first topic illustrates just what those behind the CER push hope to accomplish. “Compare the effectiveness of treatment strategies for atrial fibrillation including surgery, catheter ablation, and pharmacologic treatment.”

“What this means,” says I. Steven Udvarhelyi, MD, senior vice president and chief medical officer at Independence Blue Cross, “is that the experts that sat on the panel with me looking at the available research said that we do not have enough information that tells us the relative effectiveness. Do we know that surgery works? Yes we do. Do we know that catheter ablation works? Yes we do. Do we know that drug treatment works? Yes. But do we know which subset of patients each one of those is best suited to, and what is the differential effectiveness? No, we don’t.”

Udvarhelyi was one of two health plan executives (the other was George J. Isham, MD, medical director and chief health officer at HealthPartners) who sat on the committee, which seeks to allocate the $1.1 billion the government set aside for CER under the American Recovery and Reinvestment Act of 2009, a.k.a. the stimulus program.

The idea is for all stakeholders to know what works better. “It could be a drug versus another drug,” says Udvarhelyi (pronounced “ewd ver hi”). “It could be a drug versus a non-drug intervention. It could be two surgical interventions. It could be a medical intervention versus a surgical intervention. It could be looking at different ways to disseminate information and knowledge to patients. How do you make care more effective? It’s not that it’s not happening today. It’s just not happening on a broad enough scale.”

Definition

The committee, under the auspices of the Institute of Medicine, defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.”

The topics were divided into quartiles, and those in the first quartile were given higher priority than those in the other three. It comes down to knowing what works.

“...[I]nnumerable practical decisions facing patients and doctors every day do not rest on a solid foundation of knowledge about what constitutes the best choice of care,” says the committee report, Initial National Priorities for Comparative Effectiveness Research. “One consequence of this uncertainty is that highly similar patients experience widely varying treatment in different settings, and these patients cannot all be receiving the best care.” (For the report, go to http://www.nap.edu/catalog.php?record_id=12648.)

Take this topic, for instance: “Compare the effectiveness of management strategies for localized prostate cancer (e.g., active surveillance, radical prostatectomy [conventional, robotic, and laparoscopic], and radiotherapy [conformal, brachytherapy, proton-beam, and intensity-modulated radiotherapy]) on survival, recurrence, side effects, quality of life, and costs.”

No data

Sheldon Greenfield, MD, former editor of Annals of Internal Medicine and co-chairman of the committee, says, “There are five or six major modalities for prostate cancer out there. There has never been a head-to-head comparison. The primary care physicians might tell you the treatment choices, but without any data to support it.”

Advisory board

The committee wants the Department of Health and Human Services to create an advisory board that will oversee a public-private enterprise. The study states: “To ensure research activities that truly embrace the definition of CER, the ARRA funds — and subsequent funding to support CER — should flow through a CER coordinating authority directly to grantees, through federal agencies, or both.”

Greenfield: “No one government agency really has the breadth of study designs or approaches that is needed to cover these topics. But on the other hand, CMS couldn’t make a recommendation about whether there should be a separate independent group, although many people felt that way, or whether it should be housed in AHRQ or some other place.

“Without sort of a central oversight group at the federal level, it is uncertain how these things are going to proceed. Right now they are proceeding through each individual agency. AHRQ specializes in systematic review of the literature and big databases, whereas NIH does trials. So each government agency does its own thing.”

Research organizations that are funded by health insurers, such as the Blue Cross & Blue Shield Association’s Technology Evaluation Center in Chicago, should certainly consider tapping some of the federal funding, says Udvarhelyi. Generally, though, health plans will most likely want to be the happy end-users of the data. “I think health plans have a vested interest in seeing this expanded research be successful.”

Info source matters

Greenfield says that health plan exuberance will depend somewhat on where the data are mined. “Do the HMOs want to participate?” says Greenfield. “Well, maybe they do and maybe they don’t. Let’s say a bunch of HMOs got together, United and a bunch of the Blues, and Kaiser and so forth, and they say, ‘We’ll participate with researchers who do this from our own data and we’ll make sure that it is done right, which is critical. Also, since some of it came from our own kinds of patients, we’ll know more about them.’”

Greenfield cites the California Health Benefits Review Program (CHBRP), established by the California State Legislature in 2002 and administered by the University of California. By law it is funded by health insurance plans. The money is collected by the two regulatory bodies, the Department of Insurance and the Department of Managed Health Care, and then flows through the Controller’s Office to the University of California. The funding goes primarily to a task force made up of faculty from the University of California campuses that have medical centers and schools of public health. The CHBRP effort also includes the three private universities in California with medical centers.

“It happens to be about one subcategory, which is legislative mandates. But the principle is there.” In other words, says Greenfield, HMOs have long recognized that it is in their interest that an unbiased group examines, compares, and then rates different treatments.

Critical point

“They get better data, which is the critical point,” says Greenfield. “It’s an unbiased group with the best kind of study designs and the best kind of expertise. It just helps the plans make decisions. They don’t necessarily have to make them all themselves based on just about nothing.” (For more on how managed care reacts to CER, see our April cover story, “Comparative Effectiveness: An Idea Whose Time Has Finally Come.”)

The memories of the backlash over denials of care are still fresh for many health plan medical directors. Conceivably, comparative effectiveness research will take much of that onus off them.

“It can’t help but help medical directors because it will provide the kind of information that will preempt them getting clobbered,” says Greenfield. They can point to the fact that the data come from their own patients, much as plans such as Kaiser Permanente and Geisinger Health Plan do now in their efforts. “They point out that the data doesn’t come from the VA or any other population far removed from the people they serve. The people they serve are part of the data pool.”

Covering the bases?

The Committee on Comparative Effectiveness Research Prioritization had thousands of potential topics to choose from in coming up with the final 100, an arbitrary number. “It actually started out 50 and we thought that wasn’t quite fair,” says Greenfield. “We wanted to make sure that a lot of bases were covered.”

The topics answer a fundamental question, says Udvarhelyi. “If you are at the clinical level thinking about which of a variety of options are best for patients, do you have the information available to you and the patient to understand the pros and cons and, literally, the comparative effectiveness of each of those options to make that decision?”

Knowing what works could revolutionize care

The Committee on Comparative Effectiveness Research Prioritization wants the government’s $1.1 billion CER effort to determine what treatments are best suited for patients suffering from a myriad of maladies. The report states that most of the committee’s time “was spent developing a process for priority setting, eliciting a wide array of input from the public, and deliberating over a list of nominated research topics.... The individual topics are grouped into quartiles according to the number of votes each received during the committee’s voting process. Topics within the first quartile were considered higher priority than those in the fourth quartile, but the order within quartiles does not signify rank.”

Here is the first quartile.

Highest priorities for research

  • Compare the effectiveness of treatment strategies for atrial fibrillation, including surgery, catheter ablation, and pharmacologic treatment.
  • Compare the effectiveness of the different treatments (e.g., assistive listening devices, cochlear implants, electric-acoustic devices, habilitation and rehabilitation methods [auditory/oral, sign language, and total communication]) for hearing loss in children and adults, especially people with diverse cultural, language, medical, and developmental backgrounds.
  • Compare the effectiveness of primary prevention methods, such as exercise and balance training, versus clinical treatments in preventing falls in older adults at varying degrees of risk.
  • Compare the effectiveness of upper endoscopy utilization and frequency for patients with gastroesophageal reflux disease on morbidity, quality of life, and diagnosis of esophageal adenocarcinoma.
  • Compare the effectiveness of dissemination and translation techniques to facilitate the use of CER by patients, clinicians, payers, and others.
  • Compare the effectiveness of comprehensive care coordination programs, such as the medical home, and usual care in managing children and adults with severe chronic disease, especially in populations with known health disparities.
  • Compare the effectiveness of different strategies of introducing biologics into the treatment algorithm for inflammatory diseases, including Crohn’s disease, ulcerative colitis, rheumatoid arthritis, and psoriatic arthritis.
  • Compare the effectiveness of various screening, prophylaxis, and treatment interventions in eradicating methicillin-resistant Staphylococcus aureus (MRSA) in communities, institutions, and hospitals.
  • Compare the effectiveness of strategies (e.g., bio-patches, reducing central line entry, chlorhexidine for all line entries, antibiotic-impregnated catheters, treating all line entries by way of a sterile field) for reducing health care-associated infections (HAI), including catheter-associated bloodstream infection, ventilator-associated pneumonia, and surgical site infections in children and adults.
  • Compare the effectiveness of management strategies for localized prostate cancer (e.g., active surveillance, radical prostatectomy [conventional, robotic, and laparoscopic], and radiotherapy [conformal, brachytherapy, proton-beam, and intensity-modulated radiotherapy]) on survival, recurrence, side effects, quality of life, and costs.
  • Establish a prospective registry to compare the effectiveness of treatment strategies for low back pain without neurological deficit or spinal deformity.
  • Compare the effectiveness and costs of alternative detection and management strategies (e.g., pharmacologic treatment, social/family support, combined pharmacologic and social/family support) for dementia in community-dwelling people and their caregivers.
  • Compare the effectiveness of pharmacologic and non-pharmacologic treatments in managing behavioral disorders in people with Alzheimer’s disease and other dementias in home and institutional settings.
  • Compare the effectiveness of school-based interventions involving meal programs, vending machines, and physical education, at different levels of intensity, in preventing and treating overweight and obesity in children and adolescents.
  • Compare the effectiveness of various strategies (e.g., clinical interventions, selected social interventions [such as improving the built environment in communities — such as manmade landscapes or buildings — and making healthy foods more available], combined clinical and social interventions) to prevent obesity, hypertension, diabetes, and heart disease in at-risk populations such as the urban poor and American Indians.
  • Compare the effectiveness of management strategies for ductal carcinoma in situ (DCIS).
  • Compare the effectiveness of imaging technologies in diagnosing, staging, and monitoring patients with cancer including positron emission tomography (PET), magnetic resonance imaging (MRI), and computed tomography (CT).
  • Compare the effectiveness of genetic and biomarker testing and usual care in preventing and treating breast, colorectal, prostate, lung, and ovarian cancer, and possibly other clinical conditions for which promising biomarkers exist.
  • Compare the effectiveness of the various delivery models (e.g., primary care, dental offices, schools, mobile vans) in preventing dental caries in children.
  • Compare the effectiveness of various primary care treatment strategies (e.g., symptom management, cognitive behavior therapy, biofeedback, social skills, educator/teacher training, parent training, pharmacologic treatment) for attention deficit hyperactivity disorder (ADHD) in children.
  • Compare the effectiveness of wraparound home and community-based services and residential treatment in managing serious emotional disorders in children and adults.
  • Compare the effectiveness of interventions (e.g., community-based multi-level interventions, simple health education, usual care) to reduce health disparities in cardiovascular disease, diabetes, cancer, musculoskeletal diseases, and birth outcomes.
  • Compare the effectiveness of literacy-sensitive disease management programs and usual care in reducing disparities in children and adults with low literacy and chronic disease (e.g., heart disease).
  • Compare the effectiveness of clinical interventions (e.g., prenatal care, nutritional counseling, smoking cessation, substance abuse treatment, and combinations of these interventions) to reduce incidences of infant mortality, pre-term births, and low birth rates, especially among African American women.
  • Compare the effectiveness of innovative strategies for preventing unintended pregnancies (e.g., over-the-counter access to oral contraceptives or other hormonal methods, expanding access to long-acting methods for young women, providing free contraceptive methods at public clinics, pharmacies, or other locations).

“Health plans have a vested interest in seeing this expanded research be successful,” says I. Steven Udvarhelyi, MD, of Independence Blue Cross.

“Do the health plans want to participate?” asks Sheldon Greenfield, MD, co-chairman of the committee that collected the topics for research. “Well, maybe they do and maybe they don’t.”