Artificial intelligence holds the promise of accelerating scientific discovery, engineering breakthroughs, and productivity across industries. Yet the magnitude of its impact depends on three factors: the share of research tasks AI can perform, how productive AI is at those tasks, and the bottlenecks that limit overall progress. The paper presents a structured model showing that even extremely powerful AI produces limited gains if it operates on only a minority of research tasks. What determines whether AI can create transformative outcomes is broad automation across the research pipeline, not incremental improvements on the tasks it already handles. The framework helps leaders, policymakers, and research organizations evaluate where AI will meaningfully accelerate progress and where constraints will sharply mute returns.
The New Frontier: AI in the Research Pipeline
AI has advanced rapidly from a supportive tool to a potentially central driver of research and development. Today's models can write code, analyze data, search literature, and even propose scientific hypotheses faster than human researchers. However, as the paper highlights, the real question is not whether AI is powerful, but how its abilities translate into measurable increases in the pace of progress. Understanding where AI fits into the research pipeline reveals that some tasks naturally lend themselves to AI automation, while others remain deeply constrained by human judgment, experimentation, and validation.
Productivity Gains Are Real—But Constrained by Bottlenecks
AI's advantages are striking on the tasks it can perform: it works at scale, at high speed, and at low cost. Yet the research emphasizes a critical insight: even extraordinary improvements in AI productivity deliver limited benefits when bottleneck tasks remain human-dependent. Fields like biology, chemistry, and experimental physics rely heavily on wet labs, physical instruments, and human verification, and their results are highly sensitive to errors. If AI accelerates only the cognitive or computational portions of these processes, the overall rate of progress increases only modestly. The bottleneck, not the breakthrough, becomes the dominant force.
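To make the bottleneck arithmetic concrete, the sketch below applies an Amdahl's-law-style calculation: AI speeds up the share of work it can perform, while the remaining tasks proceed at the original human pace. The functional form and the example numbers are illustrative assumptions, not the paper's exact model.

```python
def overall_speedup(automated_share: float, ai_speedup: float) -> float:
    """Estimate end-to-end research acceleration, Amdahl's-law style.

    automated_share: fraction of total research effort AI can perform (0 to 1).
    ai_speedup: productivity multiplier on the automated portion.
    The non-automated (bottleneck) portion proceeds at the original pace.
    """
    return 1.0 / ((1.0 - automated_share) + automated_share / ai_speedup)

# Even a 1000x productivity gain on half the tasks yields barely 2x overall.
print(overall_speedup(0.50, 1000))  # ~2.0
print(overall_speedup(0.90, 1000))  # ~9.9
print(overall_speedup(0.99, 1000))  # ~91
```

The pattern mirrors the paper's message: past a point, making the automated tasks faster adds almost nothing, and only widening the automated share moves the overall number.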
Transformative AI Requires Breadth, Not Just Intelligence
The paper argues that achieving “transformative AI”—defined as a 10x acceleration in progress—requires not only superhuman intelligence but also a substantial expansion in the share of research tasks AI can perform. With strong bottlenecks across research workflows, even millions of “genius-level” AI agents would not deliver explosive innovation unless they can replace or eliminate the human-dependent stages that limit overall throughput. This means the future of research acceleration depends less on creating smarter models and more on enabling AI systems to operate across the full spectrum of cognitive, analytical, experimental, and verification tasks.
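Under the same simplified relation (again an illustrative assumption, not the paper's own production function), one can back out how broad automation has to be to hit a given acceleration target, such as the 10x threshold used above to define transformative AI:

```python
def min_automated_share(target_speedup: float, ai_speedup: float = float("inf")) -> float:
    """Smallest share of research tasks AI must cover to reach target_speedup.

    Solves 1 / ((1 - a) + a / ai_speedup) = target_speedup for a.
    With ai_speedup -> infinity this reduces to 1 - 1 / target_speedup.
    """
    if ai_speedup == float("inf"):
        return 1.0 - 1.0 / target_speedup
    return (1.0 - 1.0 / target_speedup) / (1.0 - 1.0 / ai_speedup)

# A 10x acceleration needs at least 90% of research effort automated,
# no matter how capable the AI is on the tasks it does handle.
print(min_automated_share(10))      # 0.9
print(min_automated_share(10, 20))  # ~0.95 -- finite AI gains demand even more breadth
```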
Conclusion
AI will undeniably reshape how research is conducted, but its ultimate impact depends on the breadth of tasks it can automate—not just its intelligence. Bottlenecks across research processes will continue to constrain overall progress unless AI expands into these currently irreplaceable areas. For leaders, the implication is clear: meaningful acceleration requires investing not only in more capable AI, but also in redesigning research workflows to allow automation to reach its full potential.
References:
Benjamin Jones, "Artificial Intelligence in Research and Development," NBER, 2025 (accessed Nov. 2025).
Editor: BUILD IT: Research & Publishing Team




