Quantum computing is quickly gaining ground in the life sciences, drawing growing attention from global institutions and a rising volume of R&D experimentation. The United Nations proclaimed 2025 the International Year of Quantum Science and Technology, a public awareness campaign spanning all sectors, including health care. The most recent quantum applications have centered on drug discovery, from quantum models that design cancer drugs to models that better capture Alzheimer's disease. In many of these efforts, quantum computing is integrated into existing artificial intelligence (AI) models to improve their power and information modeling.
It is a space of excitement and promise, but also of apprehension. Sponsors have never been under more pressure to move trials faster and maximize budgets. Current AI models have already proven valuable as tools for optimizing trial efficiency, a strength quantum computing hopes to build on. However, bringing quantum into the clinical trial arena also raises concerns about data privacy and cybersecurity.
The pressure is on the industry to adapt to the frantic pace of AI's evolution and to prepare for quantum. "Six months ago, we did not have a reasoning model. Now we have a reasoning model," said Sankarasubbu.
Quantum's potential, AI's reality
The buzz surrounding quantum computing promises a great deal: essentially, a sweeping end-to-end improvement across every stage of the drug design, development, and clinical trial process.
But that future depends on the foundations built today. AI may not yet simulate complexity at the quantum scale, but it already offers tangible, scalable improvements across clinical trial workflows. And unlike quantum, which is still largely exploratory, AI is ready to implement now.
AI-driven models help sponsors streamline patient recruitment, data collection, and real-time monitoring, shortening trial timelines overall. For example, sponsors often rely on diagnostic codes to select patients, but those codes are designed primarily for revenue cycle management, not eligibility screening. The real wealth of data typically lies in physicians' notes, which AI can mine to uncover eligibility signals that structured codes often miss.
A trial may need a stage 3 or stage 4 cancer patient with three weeks of disease progression, criteria "generally not captured in revenue cycle management software, which drives most real-world data," said Sankarasubbu. Integrating this approach can help streamline recruitment and reduce start-up delays.
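To make the idea concrete, here is a minimal Python sketch of screening free-text notes for eligibility signals that billing codes miss. The note text, patterns, and criteria are invented for illustration; a production system like the one described would use clinical NLP or large language models rather than regular expressions, and would need PHI-safe data handling.

```python
import re

# Hypothetical free-text clinician notes keyed by patient ID; real notes
# would come from an EHR and require privacy-safe handling.
NOTES = {
    "pt-001": "Stage IV NSCLC. Imaging shows disease progression over the past 3 weeks.",
    "pt-002": "Stage II breast cancer, stable disease, no progression noted.",
}

# Toy patterns for two signals structured codes rarely capture:
# late-stage disease and recent progression. Note that naive matching
# misses negation ("no progression"), one reason real systems use NLP.
STAGE_PATTERN = re.compile(r"stage\s+(iii|iv|3|4)\b", re.IGNORECASE)
PROGRESSION_PATTERN = re.compile(r"progression", re.IGNORECASE)

def screen_note(note: str) -> bool:
    """Return True when a note shows both eligibility signals."""
    return bool(STAGE_PATTERN.search(note)) and bool(PROGRESSION_PATTERN.search(note))

candidates = [pid for pid, note in NOTES.items() if screen_note(note)]
print(candidates)  # ['pt-001']
```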
AI can also standardize and harmonize trial data across studies with widely varying structures, endpoints, and patient populations. That fragmentation complicates preparation for internal review and regulatory submission. "By automating data harmonization, AI accelerates these processes and improves consistency," he added.
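As a rough sketch of what harmonization means in practice, the example below maps two studies' differing column names and units onto one shared schema. All names and values are invented for this example; real harmonization targets standards such as CDISC and involves far more than renaming.

```python
import pandas as pd

# Two hypothetical studies report the same concepts under different
# names and units.
study_a = pd.DataFrame({"SUBJID": ["A1"], "WT_LB": [154.0]})
study_b = pd.DataFrame({"patient_id": ["B7"], "weight_kg": [61.2]})

# Map each study's columns onto one shared schema.
COLUMN_MAP = {
    "SUBJID": "subject_id", "WT_LB": "weight",
    "patient_id": "subject_id", "weight_kg": "weight",
}

def harmonize(df: pd.DataFrame, weight_unit: str) -> pd.DataFrame:
    out = df.rename(columns=COLUMN_MAP)
    if weight_unit == "lb":  # normalize everything to kilograms
        out["weight"] = out["weight"] * 0.45359237
    return out

pooled = pd.concat([harmonize(study_a, "lb"), harmonize(study_b, "kg")],
                   ignore_index=True)
print(pooled)
```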
AI-powered dashboards add further compliance and safety oversight by flagging serious adverse events, enabling physician-led querying of trial data, and surfacing critical issues through intelligent visualization.
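The flagging logic behind such a dashboard can be as simple as the sketch below; the records and field names are hypothetical, loosely inspired by CTCAE-style severity grading, and a real system would also apply the model-driven querying described above.

```python
# Hypothetical adverse-event records; fields simplified for illustration.
adverse_events = [
    {"subject_id": "A1", "term": "nausea", "grade": 1, "serious": False},
    {"subject_id": "B7", "term": "sepsis", "grade": 4, "serious": True},
]

def flag_for_review(events):
    """Surface events a dashboard should escalate: serious or grade >= 3."""
    return [e for e in events if e["serious"] or e["grade"] >= 3]

for event in flag_for_review(adverse_events):
    print(f"ALERT: {event['subject_id']} - {event['term']} (grade {event['grade']})")
```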
Together, these applications add up to AI's fundamental value proposition in trials: shorter timelines. "[Sponsors] need to compress clinical trials across multiple small areas to achieve that timeline compression, which will also result in cost compression," said Sankarasubbu.
Faster trials help sponsors bring products to market earlier, recoup investments sooner, and maximize revenue during exclusivity periods. Patients, in turn, benefit from earlier access to potentially life-changing therapies, especially in areas of high unmet medical need.
Sponsor best practices for responsible AI use
As AI becomes more integrated into clinical research, regulators are racing to keep pace and have raised concerns about model credibility and explainability. Sponsors must demonstrate not only what a model does, but why it does it and how its decisions are made, said Sankarasubbu. He recommends that sponsors prioritize model validation, transparency, and ethical workflows such as human-in-the-loop review.
Model validation must start early and be clearly documented, particularly when using generative models or open-source APIs. Sponsors should not rely on a single output, as models evolve constantly. At Saama, Sankarasubbu uses a "jury decision" strategy when validating models.
"I am a big fan of Law & Order. The inspiration comes from how the jury system works in court: 12 people must agree to reach a decision," he said. "Similarly here, we pit models against each other to reach an agreement. No agreement means no decision."
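A minimal sketch of that jury idea appears below: several models vote on the same record, and an answer is accepted only when enough of them agree. The stand-in lambda "models," labels, and quorum threshold are all assumptions for illustration; in practice the jurors would be competing LLMs or classifiers.

```python
from collections import Counter

def jury_decision(models, record, quorum=1.0):
    """Accept a label only when the model 'jury' agrees.

    `models` is a list of callables returning a label for `record`;
    quorum=1.0 demands unanimity, echoing the 12-person jury analogy.
    Returns the agreed label, or None when there is no agreement.
    """
    votes = Counter(model(record) for model in models)
    label, count = votes.most_common(1)[0]
    return label if count / len(models) >= quorum else None

# Stand-in "models" that always vote the same way, for demonstration only.
models = [lambda r: "eligible", lambda r: "eligible", lambda r: "ineligible"]
print(jury_decision(models, record={}))              # None: no unanimity
print(jury_decision(models, record={}, quorum=0.6))  # 'eligible' by majority
```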
Sponsors must then go further and give regulators visibility into the entire model testing and implementation process. Transparency builds trust: sponsors should proactively show how models are trained, what data they are built on, and how outputs are reviewed.
Above all, AI is meant to augment humans, not replace them, said Sankarasubbu. Every design should have a human approving the decisions a model generates. "We always treat it as a Netflix recommendation system," he said. "Netflix can recommend a film, but in the end, you decide whether or not to watch it."
Managing hallucinations, bias, and patient privacy
AI is powerful and can improve existing clinical trial workflows, but it is not infallible. Models sometimes hallucinate, producing incorrect information and presenting it confidently as fact.
In clinical contexts, hallucinations can pose real risks if models are not properly constrained. "Sponsors can manage this by grounding models in specific clinical data, using smaller, fine-tuned models trained on structured sources like ClinicalTrials.gov and historical reports," said Sankarasubbu. His team has published a now widely used paper comparing open-source models to determine which hallucinate, with benchmarks for industry use.
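Grounding, in this sense, means constraining the model to answer from retrieved trial-specific text rather than its open-ended memory. The sketch below illustrates the pattern with a toy keyword retriever and prompt; the corpus entries, scoring, and prompt wording are invented, and a real system would pair a proper retriever over sources like ClinicalTrials.gov with a fine-tuned model.

```python
# Hypothetical corpus of trial descriptions (identifiers are fabricated).
CORPUS = [
    "NCT0000000 (hypothetical): phase 2, adults 18-75, stage III-IV melanoma.",
    "NCT1111111 (hypothetical): phase 1, pediatric, relapsed leukemia.",
]

def retrieve(question: str, corpus, k=1):
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(question: str) -> str:
    """Build a prompt that forbids answers outside the retrieved context."""
    context = "\n".join(retrieve(question, CORPUS))
    return ("Answer using ONLY the context below. If the context does not "
            "contain the answer, say 'not found'.\n"
            f"Context:\n{context}\n\nQuestion: {question}")

# The resulting string would be sent to the (fine-tuned) model.
print(grounded_prompt("What is the age range for the melanoma trial?"))
```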
Data bias can also create problems in AI models. Bias often comes not from the model but from the data it is trained on. "It is not that algorithms are inherently biased. It is the data we feed them and how those algorithms interpret it," he said. "You must have control and understand the history of the data collection."
In patient recruitment, for example, bias can cause downstream problems such as reduced efficacy in underrepresented groups. A clear understanding of where training data comes from, which populations are represented, and where gaps may exist enables proactive bias mitigation.
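One simple form that proactive check can take is comparing enrolled demographics against a reference population, as in the sketch below. The numbers, group labels, and 5-point threshold are invented for illustration.

```python
from collections import Counter

# Invented figures: enrolled participants vs. the disease population
# the therapy is intended to serve.
enrolled = ["white"] * 80 + ["black"] * 5 + ["asian"] * 15
reference = {"white": 0.60, "black": 0.20, "asian": 0.20}

counts = Counter(enrolled)
total = len(enrolled)
for group, expected in reference.items():
    observed = counts[group] / total
    gap = observed - expected
    if abs(gap) > 0.05:  # flag groups off by more than 5 percentage points
        print(f"{group}: observed {observed:.0%}, expected {expected:.0%} "
              f"(gap {gap:+.0%})")
```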
For AI to do its job, it must handle large amounts of sensitive clinical trial patient data. Sankarasubbu emphasizes applying established medical data principles to protect patient privacy. Sponsors should de-identify patient data before model training or deployment and limit access to sensitive data throughout the AI development life cycle, not simply to comply with HIPAA and GDPR, but to reduce exposure to cybersecurity threats.
"This should be done from the start, before model training or deployment," he said. "People who do not need access to the data should not have access to that specific production data."
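As a minimal illustration of de-identification before training, the sketch below redacts a few obvious identifier patterns. The patterns and record are invented; genuine de-identification under HIPAA requires validated tooling and expert review, not three regexes.

```python
import re

# Toy redaction of obvious identifiers before records reach a training
# pipeline. Illustrative only; real PHI removal needs vetted tools.
PATTERNS = {
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[MRN]": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
}

def deidentify(text: str) -> str:
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

record = "Seen 03/14/2024, MRN: 99182, callback 555-867-5309."
print(deidentify(record))
# Seen [DATE], [MRN], callback [PHONE].
```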
Sponsors can further reduce exposure by collecting only essential data and minimizing the sensitive information the model handles. Some may also consider a federated learning approach, in which models learn from decentralized data sources without the data ever leaving its original location.
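The core mechanic of federated learning is that only model weights, never patient records, travel to a central server. Below is a toy round of federated averaging (FedAvg) under that assumption; the weight vectors and site sizes are fabricated, and real deployments layer on secure aggregation and differential privacy.

```python
import numpy as np

def server_aggregate(site_weights, site_sizes):
    """Average site models, weighting each by its number of samples."""
    total = sum(site_sizes)
    stacked = np.stack(site_weights)
    coeffs = np.array(site_sizes, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)

# Weights learned locally at three hospitals on their own patient data;
# the raw data itself never leaves each site.
site_weights = [np.array([0.9, 1.1]), np.array([1.0, 1.0]), np.array([1.2, 0.8])]
site_sizes = [500, 300, 200]

global_model = server_aggregate(site_weights, site_sizes)
print(global_model)  # [0.99 1.01]: the sample-weighted average
```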
Looking ahead to quantum and responsible innovation
Sankarasubbu predicts that quantum will come to the forefront within the next five years. But this technology's evolution does not happen in isolation.
"When these things evolve, the other aspects evolve alongside them," he said. "It is not as if the evolution happens in one place and not in another."
To be "quantum ready," sponsors should focus now on building solid AI governance and data protection practices, and continue evolving them in parallel with the technology itself. That means structured workflows for models across the clinical trial life cycle, following best practices for responsible AI use. It also means ensuring transparency: documenting how model outputs are generated and reviewed, and communicating internally and with regulators to build trust.
In the end, those who benefit most from AI and quantum will not be the ones chasing novelty. They will be the ones investing in structure, oversight, and long-term value.
This article was written in partnership with Saama.