There is a genuine difference between effectiveness and efficacy, even though the words are often used interchangeably. Effectiveness describes how a medication performs in a real-world setting, where patient populations and other variables cannot be controlled. Efficacy describes how a medication performs in an idealized, controlled setting, namely a clinical trial. Even so, confusion persists when it comes to evaluating medications.
Of 38 health technology assessments conducted between 2005 and 2011 by agencies in English-speaking countries, 20 indicated that they were evaluating effectiveness. Of those, however, only one actually did. Meanwhile, 17 assessments stated an intent to measure efficacy and did so, according to Context Matters, a data analysis firm.
Agency rate of discrepancy (between stated objectives and actual analysis):

| Agency | Discrepancy rate | Reviews with discrepancy |
|---|---|---|
| AHRQ (U.S.) | 0% | 0 of 6 |
| DERP (Oregon Health & Science University, U.S.) | 29% | 2 of 7 |
| CADTH (Canada) | 50% | 5 of 10 |
| HIS (Scotland) | 100% | 5 of 5 |
| NICE (United Kingdom) | 80% | 8 of 10 |
Taken together, the assessments stated intentions to measure efficacy and effectiveness at roughly comparable rates, 45 percent and 55 percent, respectively. In practice, however, effectiveness was measured far less often, just 3 percent of the time, according to the analysis, which was presented recently at a conference of the International Society for Pharmacoeconomics and Outcomes Research in Berlin.
In explaining its findings, Context Matters notes that payment decisions are usually based on health technology assessments, but that effectiveness is often assumed when the evidence presented for consideration is actually based on clinical trials, not real-world circumstances. In other words, this is more than semantics.
“This poses an interesting challenge for agencies that approve drugs for payment to be used in real-world circumstances and not under the confines of randomized control trials,” Context Matters wrote in its poster that was presented at the conference. “While the lack of evidence available at the time of an assessment may result in only efficacy being evaluated, the misuse of critical terminology may be misleading.
“This is an obvious barrier to clear communication, but the implications might be broader. If this is an indication of a more widespread misuse of critical terminology, then researchers, policymakers and other readers may be misunderstanding the implications of comparative effectiveness and health technology assessment reviews.”
Why this is happening is difficult to discern. Context Matters Chief Executive Officer Yin Ho, a co-author of the study, speculates that the distinction became blurred through a lack of awareness and simple sloppiness. Whatever the reason, the mix-up is real and, she warns, needs to be fixed if payment decisions are to be meaningful.
The firm analyzed reviews made by the Canadian Agency for Drugs and Technologies in Health; the United Kingdom’s National Institute for Health and Clinical Excellence; the Agency for Healthcare Research and Quality in the United States; Healthcare Improvement Scotland, a division of the United Kingdom’s National Health Service; Germany’s Institute for Quality and Efficiency in Healthcare; and the Drug Effectiveness Review Project at Oregon Health & Science University.