AI Could Transform Clinical Trials—and the Pharma Business

Welcome back to In the Loop, TIME’s new twice-weekly newsletter about AI. If you’re reading this in your browser, why not have the next one delivered straight to your inbox?
Who to Know: Ben Liu, CEO, Formation Bio
Much has been made of AI’s power to speed up drug discovery. Yet the number of FDA-approved drugs has held steady through the AI boom, at roughly 50 per year. “For a long time, the main hurdle in getting new medicines to patients hasn’t been drug discovery,” says Ben Liu, founder and CEO of Formation Bio, an AI company working in biotech. The real bottleneck, he says, is running clinical trials, which can take years and cost hundreds of millions of dollars.
Formation Bio, backed by prominent investors including Sam Altman and Michael Moritz, is applying AI to this stage of the process instead. The company claims it can cut trial times by up to 50% by using AI to speed up administrative work such as recruiting patients, preparing regulatory submissions, and matching drugs to specific diseases. (Formation does not use AI to shorten a trial’s treatment phase, the period when the drug is actively tested on patients, but rather the administrative and analytical work that comes before and after.)
Its business model is to buy three to four promising drugs per year, run the trials itself, and sell successful candidates at a significant profit. So far, Liu says, the company has sold two drugs: one to Sanofi in a €545 million deal, and a second, in which it held a minority stake, to Eli Lilly for a total sale value of just under $2 billion.
“A key motivation is our belief that a better pharmaceutical company can be built,” Liu remarks. “If trials can be conducted more affordably and quickly, and instead of 100,000 employees, only 100 people are needed—using these AI systems to handle most knowledge-based tasks—then drugs can be made more accessible and affordable.”
What to Know: U.S. snubs international AI safety report
The Trump Administration refused to endorse a global intergovernmental report highlighting the risks of AI’s rapid development, my colleague Harry Booth reports for TIME.
The second International AI Safety Report, released today and led by Turing Award-winning scientist Yoshua Bengio, was co-signed by 30 governments and international bodies, including China, the U.K., and the European Union. Its aim was to build a shared understanding of the fast-emerging evidence on AI’s risks, so that governments can manage them better. But with the global leader in AI development declining to sign for the first time, that goal has been thrown into doubt at a pivotal moment.
What the report says — Contrary to claims that AI progress is stagnating, the report states, “over the past year, the capabilities of general-purpose AI models and systems have continued to advance.” The authors acknowledge uncertainty about how long this pace will persist—and whether AI will eventually, as some top CEOs predict, outperform humans in most economically valuable tasks. Yet they argue it would be reckless to disregard this possibility. “A prudent strategy, whether in government or business, is to prepare for all plausible scenarios,” Bengio tells TIME.
The risks — The report also finds that long-warned-of risks from advanced AI, such as helping non-experts create bioweapons, are attracting a stronger scientific consensus, even as some doubts linger. It notes there is already robust evidence that criminal groups and state-backed attackers are using current AI systems to expand the scale and speed of their cyber operations.
AIs behaving badly — Evidence is also accumulating for another category of risk: the unsettling tendency of AI systems to sometimes act against their creators, including by concealing problematic behavior when they know they are being tested. Since January 2025, the report states, “models have demonstrated more sophisticated planning and oversight-evading capabilities, complicating the assessment of their abilities,” though it acknowledges that expert views on the likelihood of humans losing control of AI systems “differ widely.”
AI in Action
A creative application of image-generation tech: a tool that takes an architectural rendering as input and produces an image of “how it will truly appear on a random Tuesday in November.”
What We’re Reading
Gerrit De Vynck writes in The Washington Post: “In 2024, Google violated its own policies prohibiting AI use for weapons or surveillance by assisting an Israeli military contractor in analyzing drone video, a former Google employee alleged in a confidential federal whistleblower complaint reviewed by The Washington Post. Google’s Gemini AI technology was employed by Israel’s defense forces during a period when the company was publicly distancing itself from the nation’s military.”