The Paradox of Personalized Medicine

Topics:
Pharmaceutical and Medical Device Policy; Population Health

Personalized medicine is one of those ideas that seems so obviously good it barely warrants debate. Why wouldn’t we want to tailor treatments to each patient’s unique biology? It’s like choosing a tailored suit over one marked “medium.” In some areas of medicine—especially oncology—we’ve already made this pivot. Instead of asking, “Where is the tumor?” we now ask, “What mutation drives it?” That’s not science fiction; it’s already how we treat certain forms of lung cancer: sequence the full genome of healthy cells from the patient’s body, compare it to the sequence of the cancerous cells, look for differences, and, bingo, you’ve identified the mutations driving the malignancy. Step two: target that gene or its downstream products to knock down the cancer.

In fact, lung cancer has been a kind of laboratory for this new medical paradigm. Two decades ago, nearly all patients were treated with chemotherapy, with limited success. Now, thanks to genomics, doctors can sort tumors based on mutations in genes like EGFR, ALK, ROS1, and others. These genetic markers guide treatment decisions, matching patients with targeted drugs that can significantly extend life—sometimes by years, not months. Osimertinib, for example, extends progression-free survival by more than 18 months for patients with a common EGFR mutation.¹

But as with many innovations, the brighter the promise, the darker the shadow. Behind this progress is a growing crisis—one that few people outside the field are talking about. That is, the more precise we get, the smaller each eligible patient group becomes. At the outset, a mutation might be found in 10–15% of patients. That’s workable. But increasingly, researchers are identifying mutations that affect just 1% of patients—or less. Some genetic subtypes are so rare they don’t even register in national datasets. That’s where personalized medicine runs into a systemic problem: the economic and regulatory machinery we use to deliver treatments wasn’t built for micro-scale medicine.

A recent National Bureau of Economic Research working paper by Marina Chiara Garassino, Kunle Odunsi, Marciano Siniscalchi, and Pietro Veronesi lays out this dilemma in clear—and troubling—terms.² Their conclusion: we face a paradox. Under the current regulatory model of drug development and approval, personalized medicine becomes economically infeasible as therapies get more individualized. We are, in short, approaching a breaking point.

Here’s how the current drug approval process works. A company discovers a potential therapy. It then runs multiple stages of randomized controlled trials (RCTs), usually starting with small safety trials and ending with large-scale studies involving hundreds or thousands of patients. These trials take years and cost hundreds of millions of dollars.³ The goal is to show, with statistical confidence, that the new treatment is better than the extant standard of care.

That’s doable when the condition affects millions of people. But when you’re developing a drug for, say, a mutation that affects 0.2% of lung cancer patients—maybe a thousand people nationally—it becomes nearly impossible to recruit a sufficient sample. Many of those patients will be too sick, too scattered, or too advanced in their cancer to qualify. By the time the trial begins, some patients will have died waiting. And even setting aside these logistical challenges, with such a small potential patient pool, it is simply not worth it to the drug company to run trials at all.

The recent economic model by Garassino and colleagues puts hard numbers behind this dynamic. Once a mutation affects fewer than 0.6% of patients, the expected return on investment drops to zero—even if the drug works beautifully.² It’s not that the science fails; it’s that the math fails. Companies can’t recoup their costs without charging prices that make Martin Shkreli look like a bargain shopper.

And just how high would those prices need to be? For mutations affecting 0.05% of patients, a drug would need to cost over $20 million per patient per year to break even.⁴ That’s not a hypothetical number or a hyperbolic one. It’s not just a health care crisis; it’s a moral absurdity. No insurer—private or public—will pay that. No budget line can absorb it. So the drug doesn’t get produced. Not because we can’t make it, but because we’ve designed a system that only makes sense at scale.
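To see why the arithmetic collapses so quickly, it helps to sketch the back-of-envelope version of the calculation. The figures below are illustrative assumptions of mine, not numbers from Garassino and colleagues’ model, which also accounts for the probability of trial failure, discounting, pricing, and competition; the point is simply that a fixed development cost gets spread over a patient pool that shrinks with prevalence.

```python
# Back-of-envelope break-even price for a mutation-targeted therapy.
# All inputs are illustrative assumptions, not figures from Garassino et al.

def breakeven_price_per_patient_year(development_cost, new_cases_per_year,
                                     mutation_prevalence, years_on_market):
    """Price per patient-year needed just to recover a fixed development cost."""
    eligible_per_year = new_cases_per_year * mutation_prevalence
    total_patient_years = eligible_per_year * years_on_market
    return development_cost / total_patient_years

# Hypothetical inputs: ~$1.5 billion to develop a drug (the order of magnitude
# reported by DiMasi et al.), roughly 230,000 new U.S. lung cancer cases per
# year, and 10 years of sales before generic competition.
for prevalence in (0.10, 0.006, 0.0005):
    price = breakeven_price_per_patient_year(
        development_cost=1.5e9,
        new_cases_per_year=230_000,
        mutation_prevalence=prevalence,
        years_on_market=10,
    )
    print(f"prevalence {prevalence:.2%}: break-even price of about ${price:,.0f} per patient-year")
```

Even this crude version, which ignores manufacturing, marketing, the cost of all the candidates that fail, and any profit margin, shows the break-even price rising two-hundredfold as prevalence falls from 10% to 0.05%; adding those omitted costs is what pushes the authors’ richer model into the tens of millions of dollars per patient.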

We’re already seeing the fallout. In 2023, Roche pulled out of a partnership to develop pralsetinib, a promising drug for RET-positive lung cancer. The reason? Too few patients, too much competition, not enough profit potential.⁵ This is the new normal. Increasingly often, promising drugs will never make it past the spreadsheet.

And so, paradoxically, the better we get at understanding disease at the molecular level, the harder it becomes to act on that knowledge. We’re in a situation where the science is advancing faster than the economics—or the institutions that translate that science into patient care. As we move from blockbuster drugs to bespoke therapies, we’re entering a domain where the regulatory infrastructure, clinical trial norms, and intellectual property laws no longer fit the world they’re supposed to govern.

This isn’t just a niche concern for oncologists and pharmaceutical executives. It’s a system-level challenge that mirrors the same friction we’ve seen elsewhere when scale changes but the rules don’t. Think about how public education has struggled with individualized learning plans, or how our tax code buckles under gig work and platform economies. In each case, institutions designed for homogeneity are asked to handle increasing differentiation—and mostly, they fail.

So, what’s the workaround?

Garassino and coauthors suggest a compelling pivot: instead of approving every single personalized therapy through a full-blown clinical trial, we should approve the process by which these therapies are discovered. Think of it as certifying a 3D printer rather than every single item it produces. If the printer reliably makes safe, effective products, we don’t need to test every spatula or water pitcher that comes out of it.

The science already exists to support this shift. Tools like AlphaFold3 can predict how proteins fold with uncanny precision. If you know how a mutated protein behaves, you can identify molecules that block it. Artificial intelligence (AI) models can now screen billions of compounds, run simulations, and predict drug safety profiles in silico—all without stepping into a wet lab.⁶ The entire process, from mutation discovery to viable drug candidate, can take weeks, not years.

The authors call this a Personalized Drug Discovery Process (PDDP). Here’s how it works: identify a patient’s mutation; use AI to model the faulty protein; simulate compound interactions; evaluate likely safety and bioavailability; synthesize the compound. Do it all within an approved process framework. Once the process is certified, it can be applied repeatedly, adapting to each patient’s biology without requiring a separate RCT each time.
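To make the workflow concrete, here is a minimal schematic of what one pass through such a process might look like. Every function name and body below is a placeholder of mine, not an implementation from the paper; in practice each step would call sequencing pipelines, structure-prediction models, docking simulations, and safety predictors, all operating inside the certified framework.

```python
# Schematic sketch of one pass through a Personalized Drug Discovery Process (PDDP).
# Function names and bodies are illustrative placeholders, not the authors' method.

def identify_driver_mutation(tumor_seq: str, healthy_seq: str) -> str:
    """Compare tumor and healthy genomes; return the suspected driver mutation."""
    return "EGFR L858R"  # placeholder result

def predict_mutant_protein_structure(mutation: str) -> str:
    """Model how the mutated protein folds (e.g., with a structure-prediction tool)."""
    return f"structure({mutation})"  # placeholder

def screen_compounds_in_silico(structure: str) -> str:
    """Simulate binding of a compound library against the modeled target; return the best hit."""
    return "compound-42"  # placeholder

def passes_safety_screen(compound: str) -> bool:
    """Predict toxicity and bioavailability in silico before anything is synthesized."""
    return True  # placeholder

def run_pddp(tumor_seq: str, healthy_seq: str) -> str | None:
    """One certified process, applied patient by patient, instead of a per-drug trial."""
    mutation = identify_driver_mutation(tumor_seq, healthy_seq)
    structure = predict_mutant_protein_structure(mutation)
    compound = screen_compounds_in_silico(structure)
    if not passes_safety_screen(compound):
        return None
    return compound  # hand off to synthesis and monitored clinical use

print(run_pddp("ATG...", "ATG..."))  # each new patient re-runs the same approved steps
```

The regulatory object in this picture is the process itself: once the sequence of steps is validated, each patient’s therapy is an output of an approved pipeline rather than a product requiring its own randomized trial.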

If this sounds radical, it’s not. Other industries do this all the time. We don’t test every jet engine bolt individually—we test the process that makes them. We don’t inspect each MRI scan—we trust the approved machine. Even in medicine, diagnostics are often cleared by platform. What’s missing is a regulatory mechanism that does the same for personalized therapeutics.

And here’s the kicker: the economics work. If therapies developed via PDDP yield even 30–70% of the profits of traditional drugs, companies still come out ahead.⁷ That’s because the most expensive part of development—clinical trials—shrinks dramatically. And from a societal standpoint, the cost savings are staggering. No more $20 million therapies. No more abandoning patients because they’re too rare.

That shift also reorients innovation. Right now, drug discovery is biased toward the statistically common and financially safe. But what if it were economically viable to develop a therapy for a child with a one-in-a-million mutation? What if the long tail of the genetic distribution were not a graveyard of unmet need, but a frontier of targeted innovation? That’s what PDDP unlocks—not just affordability, but possibility.

Of course, none of this will be easy. The FDA and its global peers would need new frameworks to evaluate process-based discovery. AI models would need to be transparent and auditable. Real-world patient outcomes would need to be tracked rigorously. There would be new questions about intellectual property: Who owns the process? Who gets paid when it’s used? And how do we ensure equitable access?

But these are problems worth solving. The alternative is to let personalized medicine stagnate—not because we lack the technology or the therapies, but because we failed to imagine a system that could actually deliver them. The danger isn’t that personalized medicine will fail. The danger is that it will succeed scientifically—and collapse economically.

References

1. Soria, J.C., Y. Ohe, J. Vansteenkiste, T. Reungwetwattana, B. Chewaskulyong, K.H. Lee, A. Dechaphunkul, F. Imamura, N. Nogami, T. Kurata, and I. Okamoto. 2018. Osimertinib in untreated EGFR-mutated advanced non–small-cell lung cancer. New England Journal of Medicine, 378(2), pp.113-125.

2. Garassino, M.C., K. Odunsi, M. Siniscalchi, and P. Veronesi. 2025. On the economic infeasibility of personalized medicine, and a solution. NBER Working Paper 33539. http://www.nber.org/papers/w33539.

3. DiMasi, J.A., H.G. Grabowski, and R.W. Hansen. 2016. Innovation in the pharmaceutical industry: New estimates of R&D costs. Journal of Health Economics, 47, pp.20-33.

4. Garassino et al. (2025), op. cit.

5. Blueprint Medicines. 2023. Blueprint Medicines to Regain Global Rights to GAVRETO® (pralsetinib) from Roche. [online] www.prnewswire.com. Available at: https://www.prnewswire.com/news-releases/blueprint-medicines-to-regain-global-rights-to-gavreto-pralsetinib-from-roche-301754042.html.

6. Jumper, J., R. Evans, A. Pritzel, T. Green, M. Figurnov, O. Ronneberger, K. Tunyasuvunakool, R. Bates, A. Žídek, A. Potapenko, and A. Bridgland. 2021. Highly accurate protein structure prediction with AlphaFold. Nature, 596(7873), pp.583-589.

7. Garassino et al. (2025), op. cit.


Citation:
Conley D. The Paradox of Personalized Medicine. Milbank Quarterly Opinion. April 5, 2025. https://doi.org/10.1599/mqop.2025.0405.


About the Author

Dalton Conley is the Henry Putnam University Professor in Sociology at Princeton University and a faculty affiliate at the Office of Population Research and the Center for Health and Wellbeing. He is also a research associate at the National Bureau of Economic Research (NBER), and in a pro bono capacity he serves as dean of health sciences for the University of the People, a tuition-free, accredited, online college committed to expanding access to higher education. He earned an MPA in public policy (1992) and a PhD in sociology (1996) from Columbia University, and a PhD in biology from New York University in 2014. He has been the recipient of Guggenheim, Robert Wood Johnson Foundation, and Russell Sage Foundation fellowships as well as a CAREER Award and the Alan T. Waterman Award from the National Science Foundation. He is an elected fellow of the American Academy of Arts and Sciences and an elected member of the National Academy of Sciences.
