In 1987, the Nobel laureate economist Robert Solow made an observation that would define a generation of technology debate: “You can see the computer age everywhere but in the productivity statistics.” Nearly four decades later, his words feel prophetic once more. Billions of dollars are flowing into artificial intelligence across every sector of the global economy, yet, as Apollo chief economist Torsten Slok noted in early 2026, “AI is everywhere except in the incoming macroeconomic data.”

Nowhere is this tension more consequential than in biopharmaceutical research and development (R&D). Drug development remains one of the most expensive, time-consuming, and failure-prone endeavors in modern industry. The average cost to bring a single new drug to market is $1.3 billion, according to a 2025 RAND Corporation study that analyzed 38 drugs approved by the FDA in 2019. Timelines routinely stretch beyond a decade, and clinical-phase success rates hover under 15 percent. AI promises to fundamentally reshape these economics: shortening discovery timelines, improving candidate selection, optimizing trial design, and reducing attrition. And yet, despite enormous investment, tangible demonstrations of these gains at the enterprise level remain limited.

This is the AI productivity paradox, and biopharma R&D leaders ignore it at their peril. This paper explores the nature and history of the paradox, examines how it is manifesting in biopharma R&D today, identifies high-potential use cases across the drug development continuum, and proposes a framework and roadmap for organizations seeking to move through the paradox rather than be consumed by it.