A Conversation with Vinay Prasad, MD: To ‘First, Do No Harm,’ You Must Start With Good Evidence

The intrepid young oncologist’s criticism of cancer screening and surrogate endpoints has stirred up controversy. He says he just wants drugmakers and others to meet high standards of evidence.

Interview by Peter Wehrwein

Vinay Prasad, MD, a 33-year-old oncologist at Oregon Health & Science University who specializes in lymphoma, has written critically—and prolifically—about everything from the state of the evidence for cancer screening to surrogate endpoints in clinical trials to the misleading language used to describe cancer research. His opinion pieces and research papers in Nature, JAMA, BMJ, and other prestigious journals have generated sometimes heated debate—and many “reply” entries among his PubMed citations. With Adam Cifu, MD, he wrote Ending Medical Reversal (Johns Hopkins University Press, 2015), a book that argues that many medications, devices, and other medical interventions are adopted based on poor evidence. Prasad spoke with us recently by phone from his office in Portland, Ore.

I gather you’re not too popular in the oncology community. You have a lot of thoughts that are skeptical and throw cold water on some current enthusiasms.

You could say that, certainly in the oncology community, and probably the medical community as well. I did write something recently in Nature that kind of pushed the issue of precision medicine. I made a joke today that I got many, many emails in reply, and they were of one of two varieties: thank you for writing a great article, or curse you for writing the worst article.

How do you respond to being a person who stirs up controversy? Does it excite you?

No, I neither seek it out nor avoid it. I just think that if people read my book and the papers we publish, they’ll see a fairly consistent viewpoint being applied to many different situations, and at the end of the day, that’s what happens when you apply any consistent viewpoint to different examples.

What’s that consistent viewpoint that you just referred to?

I think the consistent viewpoint is that we should recommend medical therapies to patients that, on average, do more good than harm. By “good” I mean things that matter to people, which is living longer, living better. That’s the consistent viewpoint.

And then the controversies that arise are about what’s the level of evidence you need before you recommend a practice. I think when you take the long view of medicine, you tend to conclude that you really want better evidence than what is offered a lot of times, because we’ve been burned before when we didn’t ask for better evidence. And, we’ve harmed people before when we didn’t ask for better evidence.

We want medicine that makes people better off, and the question is, is specific practice X doing that? I think often the sobering answer is no, or we don’t know.

Evidence-based medicine got started 15, 20 years ago. It’s an idea and an agenda that’s supposed to address your concern about unproven practices entering clinical practice. Why has it been such a failure?

It’s been a success in one way: for many people, it’s been a very influential idea, contrary to every other period in the history of medicine, when the justification for an intervention was some mechanistic rationale for why it might work or an endorsement by a prominent figure. We finally entered an era where we realized the best endorsement of a medical practice is some proof that it actually does what you think it does.

But then, the question is, “Why has it been unsuccessful in many ways?” I think it’s been unsuccessful in that there’s been resistance from people who like the old method, which is that it doesn’t matter if it’s been proven to work; it just matters that it’s plausible that it does work. That’s good enough. I think those people still exist.

A lot of people benefit from the lower bar. If you’re making something that’s a marginal product, or maybe doesn’t even work at all, you can still get it to market. So you can still make a lot of money if people don’t really ask you for proof that it works. That’s one.

Two, it’s tedious. It’s not glamorous to develop evidence. Three, evidence sometimes doesn’t apply to situations where you have patients with a unique set of problems in front of you, and it’s nice to be able to sort of, you know, make things up on the spot. Sort of improvise. And I think to some degree, we all do that. But I think the less attention we pay to the evidence, the psychologically easier it is to do that, perhaps, and the more confident you are that your extrapolations are correct.

I’d like to discuss two areas that you write about in Ending Medical Reversal. One is screening, and the other is surrogate endpoints. It seems to me that medical reversals can often be chalked up to research in which the control group isn’t really a control group, so that sham surgery was needed, or in which the endpoint was subjective. But screening seems to have its own dynamic.

I think you put it well. It has its own dynamic. The problem with screening is the endpoints that have been met in trials are not the endpoints we wish we had met, and we confuse the two of them.

With screening, we’re taking a lot of healthy people—they’re not patients, they’re healthy people—and they come to you, as a doctor, as a health care provider, and they say, “I feel great and I’m doing fine; is there anything that you have that could make me better off?” And screening tests are an answer to that question. But the truth is, we don’t really have good evidence that they actually do make you better off. We make a lot of assumptions, and we kind of fool ourselves into thinking that they do.

There are two types of cancer screenings. There are the ones that have some evidence, and there are the ones that have absolutely no evidence. And both are actually used a lot in practice. You know, CA-125 is used to screen for ovarian cancer, and that’s something that’s failed in multiple randomized trials, even to decrease dying from ovarian cancer. Forget about patients living longer, living better.

Even for the screening tests that do decrease the rate of dying from that cancer, we point out that that’s technically not really what you care about. When you get your sigmoidoscopy, you are doing it to decrease your rate of dying from colon cancer, yes. But that’s because you think that’s going to increase your life expectancy overall, or decrease your morbidity. And the truth is we don’t actually have proof that it does these last two things.

So screening is such a hot mess of a health care issue. Do you think there are ways out of it? Do you think doing some risk stratification would help?

I think that mammography, prostate cancer screening—they really are hot messes. I mean, they’re public health disasters. And your question is a good one: why, when we embarked on screening, did we ever have this idea that the screening test has to be one-size-fits-all for everybody? I think that was kind of a foolish idea. We should be looking at this first in very high-risk people: the things we know are high risk, like family history and certain risk factors. This is a general principle of medicine. Take a new drug like a statin medication: the first randomized trial of statins was not statins in the tap water. The first trial was in people who had had a heart attack and very, very high cholesterol. They picked very high-risk people, and they proved mortality benefits.

In screening, that’s how we should have done it. We should have picked people at very high risk of breast cancer, screened them, and shown a mortality benefit. And then expanded. But we didn’t do it that way. We always tried to get everyone in one fell swoop, and that’s completely the wrong strategy. And it’s unprecedented in medicine to want to do that.

Do you think that using genetic testing to assess vulnerability could be the ultimate risk factor stratifier, maybe the thing that solves the cancer screening problem?

You’re on the right line of thinking, which is that maybe there are some prognostic factors that are so predictive that they really do rid us of this dilemma of overdiagnosis, overtreatment, all those things.

I would say that genetics may continue to identify small groups of people. Right now we have a few germline mutations in, for example, the BRCA genes, where we know that they result in such a high risk that women benefit from things like prophylactic removal of the ovaries. So, we have a couple of examples. I’m open to more. I would be willing to consider evidence if somebody did come up with a gene signature that could risk stratify. But if I had to bet money on it, I would bet that it probably will not be the perfect risk tool, because a lot of these common cancers tend to have multigenic causes: many, many genetic factors may slightly increase or decrease risk. So my gut instinct is that it won’t be a perfect solution. But I do think avenues of investigation along those lines should be pursued, and it’s the right way to think about screening and solving this problem.

If you were an insurer, what would you do about all this screening? Would you stop paying for it?

There are many things in medicine that I would stop paying for. Screening is probably the one issue where I would be reluctant to use that kind of blunt-force tactic, because of the extreme emotions it generates. If I were an insurer, I would actually pay for a randomized trial of screening. Get the people who are on the fence—and there are many—and ask them to enroll in the randomized study. That study has the potential to answer this question in a very definitive way and probably change future practice.

I would also mandate that healthy people be told the hard facts about screening: that it has not been shown to improve overall mortality, along with the details of overdiagnosis and false positives. An honest discussion alone will likely result in many people choosing not to have it.

I want to change the subject to surrogate endpoints. There is a general discussion about how surrogate endpoints can put us on the path to ineffective interventions: you end up treating the lab value rather than the clinical condition. But with cancer, there’s an argument that because people are living longer with cancer, getting the definitive survival data you would need for a statistically significant result takes years. In the meantime, people might be literally dying because they don’t have access to the therapy. Or dying sooner, to be more precise.

I hear that point a lot. Does waiting for survival data take a very long time? That’s built into the question, and to some degree—I hate to say it—it’s kind of a fallacy. And I’ll tell you why. The time it takes to generate a statistically positive result is contingent on the rate of the event, which is death in this case: the longer deaths take to accrue, the longer it takes to generate the result. But it is also contingent on the sample size, and many modern trials have huge sample sizes.

We’ve looked at many of the oncology trials, and the trials being run in oncology are huge. We’re running oncology trials that are so powerful they can detect survival benefits in a very short period of time. They are often overpowered to detect trivial differences.
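To make that concrete, here is a minimal sketch using Schoenfeld’s standard approximation for survival trials. The hazard ratio, enrollment sizes, and median survival below are purely illustrative assumptions, not figures from Prasad’s studies; the point is only that, for a fixed target effect, enrolling more patients shortens the follow-up time needed to accrue the required number of deaths.

```python
# Minimal sketch (illustrative numbers only, not from Prasad's papers) of why
# large oncology trials can read out overall survival quickly.
# Assumptions: exponential survival, 1:1 randomization, everyone enrolled at
# time zero, and Schoenfeld's approximation for the required number of deaths.
from math import log
from statistics import NormalDist

def required_deaths(hazard_ratio, alpha=0.05, power=0.80):
    """Schoenfeld approximation: deaths a log-rank test needs to detect
    `hazard_ratio` at two-sided `alpha` with the given `power`."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    z_beta = z.inv_cdf(power)
    return 4 * (z_alpha + z_beta) ** 2 / log(hazard_ratio) ** 2

def follow_up_years(deaths_needed, n_enrolled, median_survival_years):
    """Rough calendar time to observe `deaths_needed` deaths, treating both
    arms as exponential with the control-arm median survival."""
    hazard = log(2) / median_survival_years
    event_fraction = deaths_needed / n_enrolled
    if event_fraction >= 1:
        return float("inf")  # trial too small ever to see enough deaths
    return -log(1 - event_fraction) / hazard

deaths = required_deaths(hazard_ratio=0.75)   # roughly 380 deaths
for n in (500, 1000, 2000):                   # hypothetical enrollment sizes
    years = follow_up_years(deaths, n, median_survival_years=2.0)
    print(f"N={n}: ~{deaths:.0f} deaths needed, ~{years:.1f} years of follow-up")
```

Under these assumptions, roughly 380 deaths suffice to detect a hazard ratio of 0.75, and doubling enrollment from 500 to 1,000 patients cuts the rough follow-up time from about four years to under a year and a half, which is the sense in which very large trials can show survival benefits quickly.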

The other thing this is all contingent upon is which indications you develop cancer drugs for first. I think it is a mistake to think the manufacturers’ intent is to bring the drug to market as fast as possible. Manufacturers’ incentive is to bring the drug to market with as large a market share as possible. And in doing so, they often make decisions to chase a surrogate in a large indication, even if that takes a longer period of time. We have some examples of drugs that could have been brought to market even faster if we had gone for second rather than first line in a particular cancer.

So where should we allow surrogate endpoints? I think we should allow them in cases that are dire, where without a drug approved on a surrogate, people will pass away in a short period of time. I’m okay with them for people who have very few other treatment options, like melanoma maybe five, six years ago. That’s where you want an accelerated approval: there’s literally nothing else we have to give people with that condition.

I also think there has to be a yin to the yang. There should be post-marketing commitments to randomized trials, so we know these drugs actually do what we think they do. And those commitments should be enforced. We’ve done studies showing that, with about five years of follow-up, of 36 drugs approved based on a surrogate, only five later improved survival. Currently, we are failing at enforcing these commitments.

Is that what you are saying in your recent paper in Mayo Clinic Proceedings?

That was in JAMA Internal Medicine a year ago. In the Mayo paper, we go even further. We show that many of these drugs getting approved on surrogate endpoints are getting full approval, meaning there’s no post-marketing efficacy commitment. And when it gives full approval, the FDA’s own regulatory language says it will do that only when there is proof that a surrogate is “established.” And we show that in a big chunk—37% of those cases—no study had ever even looked to see whether these “established” surrogates correlate at all with survival, let alone established that they do. There’s no study on the topic.

So you want those approvals to be contingent upon post-market studies that will test whether that surrogate marker is, in fact, related to survival or quality of life.

Yes. And I would just add that I want those post-marketing randomized trials to be done in a timely fashion.

I would also say that I want accelerated approval to be given in the right settings, where things are dire, rare, and there are few other options. Not when there are 83 other regimens approved for the treatment. And in fact, we have some unpublished data showing that is, in fact, the case. People call something an unmet medical need, and you look and there are 80 different treatments. On what planet is that an unmet medical need?

What do you think of the Right to Try laws?

I think they’re terrible. They’re disingenuous. They’re written by people who want to weaken the FDA. That’s their ultimate purpose. They’re written under the guise of helping patients, but they do no such thing. In fact, FDA data shows that 99% of all requests for compassionate use are granted by the FDA, but very few requests are actually granted by companies. The companies are the barriers. Companies don’t want to give you their experimental drug so that this person can take it and have some side effect and then kill the whole drug development pipeline. That’s the real barrier.

There’s a lot of enthusiasm about cancer treatment. Do you think we’re in a bubble that’s going to burst?

We’re absolutely in a bubble. We have a huge number of years of life lost because of cancer. It’s a horrible problem. It’s probably, by some measurements, worse than cardiovascular disease. People always say that deaths from cardiovascular disease are greater, but that’s not the right metric. The metric is, you know, that cancer is killing people at untimely ages. 40-year-old people. That’s a huge loss of years of life.
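As a rough illustration of the years-of-life-lost argument, the sketch below compares a raw death count with YLL. The ages at death and the reference age are entirely hypothetical, not data Prasad cites; the point is only that earlier deaths dominate the YLL tally even when they are fewer in number.

```python
# Illustrative only: years of life lost (YLL) weights each death by how
# premature it is, so deaths at younger ages dominate even if they are fewer.
REFERENCE_AGE = 85  # hypothetical reference life expectancy

def years_of_life_lost(ages_at_death, reference=REFERENCE_AGE):
    return sum(max(reference - age, 0) for age in ages_at_death)

cancer_deaths = [45, 52, 60, 67, 73]      # hypothetical ages at death
cardio_deaths = [70, 76, 79, 82, 84, 88]  # more deaths, but later in life

print("cancer:", len(cancer_deaths), "deaths,", years_of_life_lost(cancer_deaths), "YLL")
print("cardio:", len(cardio_deaths), "deaths,", years_of_life_lost(cardio_deaths), "YLL")
```

With these made-up numbers, the cardiovascular column has more deaths but far fewer years of life lost, which is the comparison Prasad is drawing.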

I mean, the moonshot is right. We need to actually, as a society, do something about this problem. And I believe with the proper moonshot, we might be able to do something about it. But this is a joke of a moonshot. I call it a puddle jump. I mean, this is, you know, $700 million when you’ve been spending $5 billion a year, year after year. That’s not even on the right scale.

And all of these things that get touted in the moonshot, we already know they’re not going to bend these cancer statistics markedly. Immunotherapy. Immunotherapy has 20% to 30% response rates in a handful of cancers. If you made a pie of all the cancers that are killing people, and you highlighted all of the cancer indications that are amenable to immunotherapy, you’ll have a piece of pie that is too small to serve in a restaurant. I mean, it’s a sliver of the pie.

You’re critical of screening; you think many drug approvals are based on misleading surrogate markers; you think the excitement about immunotherapy is misplaced. So where would you put our efforts?

I think the one thing that has to be done is an unconflicted clinical trials agenda. We have cooperative groups doing a few important randomized trials, but we have almost nobody setting a vigorous clinical trials agenda, where we compare, you know, the things we already have at different doses, on different schedules, with different strategies.

Right now, 9% of patients are on clinical trials. People always say that it should be higher, but I would actually say something different. I would say that it should be much higher, but specifically for randomized trials conducted by nonconflicted sponsors. The federal government should create an agency for nonconflicted randomized controlled trials, where the people who design the trials are experts without conflicts. And Medicare should pay for the drugs in those studies. Right now we depend on the company donating the drug, and the company will only donate the drug if we ask the question the way the company wants it asked. That’s a big problem.

I also think you have to fund cancer biology research very broadly, and in a non-fad way. Giving people high doses of chemotherapy and bone marrow transplants—that was in vogue for 10 years. Before that, it was multidrug combination therapy. Then there was targeted therapy, and now we have immunotherapy and precision medicine. Maybe it is a little harsh to call these fads, because they had many real successes, but they are fads in the sense that in the years they were in vogue, there was little funding or attention for anything else. I don’t think funding should be subject to such fads. It should be more consistent and broad.

The transcript of this interview has been edited for length and clarity.
