The promise of artificial intelligence in the biotech and pharma sector is vast, from drug discovery to patient enrollment to clinical trial design. But the industry is also approaching the future with caution, and addressing the technology’s shortcomings is essential to making it a useful tool in the long run.
“Hallucinations” represent one of the challenges to widespread use of AI and machine learning. In a healthcare setting, false information generated by these models can be especially harmful, said Wael Salloum, co-founder and chief science officer at natural language processing company Mendel AI.
Currently, large language models are used to decipher patient records and guide physicians and other caregivers to the correct interventions, including drugs. If a patient is already taking one drug, for example, the LLM should not recommend a second treatment that could cause complications. But because that output is presented as trustworthy, false information is dangerous, Salloum said.
“What these LLMs are designed to do is produce really good English and convince you that’s the truth — when you’re summarizing a medical record for a doctor to make a decision, any potential mistake can be a matter of life and death,” Salloum said.
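The interaction example above can be made concrete with a simple rule check. The sketch below is purely illustrative and is not Mendel’s code: the interaction table and function names are hypothetical, and a production system would draw on curated pharmacology databases rather than a hardcoded list.

```python
# Hypothetical contraindication check, for illustration only.
# Real systems rely on curated drug-interaction databases.

INTERACTING_PAIRS = {
    frozenset({"warfarin", "aspirin"}),           # elevated bleeding risk
    frozenset({"sildenafil", "nitroglycerin"}),   # severe hypotension
}

def is_safe_to_add(current_meds: list[str], candidate: str) -> bool:
    """Return False if the candidate drug interacts with any current medication."""
    return all(
        frozenset({med, candidate}) not in INTERACTING_PAIRS
        for med in current_meds
    )

print(is_safe_to_add(["warfarin"], "aspirin"))        # False: known interaction
print(is_safe_to_add(["warfarin"], "acetaminophen"))  # True under this toy table
```

A recommender that filters every suggestion through a check like this can refuse a conflicting drug outright rather than leaving the decision to a fluent but unverified summary.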
Hallucinations are defined by ChatGPT, OpenAI’s more generalist chatbot, as the “generation of content that is not based on real or existing data but is instead produced by a machine learning model's extrapolation or creative interpretation of its training data.” Many of these errors can be small in nature but represent bigger problems down the road, according to IBM, from the spread of misinformation to enabling bad actors to formulate a cyberattack.
Mendel’s platform, called Hypercube, combines an LLM with a logic-based AI to cut the rate of hallucinations in medical research even at large scale. The company this month joined a Google Cloud partnership giving pharmas and other healthcare companies access to the platform via the tech giant’s marketplace.
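Mendel has not published Hypercube’s internals, but the neuro-symbolic pattern the company describes, in which an LLM proposes and a logic layer verifies, can be sketched in a few lines of Python. Everything below, including the stub model and the word-overlap check, is an illustrative assumption rather than the actual implementation.

```python
# Generic neuro-symbolic sketch: the LLM proposes claims, and a symbolic layer
# verifies each one against the source record before it reaches a user.
# This illustrates the pattern, not Hypercube's implementation.

class StubLLM:
    """Stand-in for a real model; deliberately emits one hallucination."""
    def extract_claims(self, record: str) -> list[str]:
        return [
            "patient has type 2 diabetes",   # grounded in the record
            "patient is on insulin",         # hallucinated claim
        ]

def verify(claim: str, record: str) -> bool:
    # Toy symbolic check: every content word of the claim must appear in the
    # record. A real system would reason over a medical ontology instead.
    words = [w for w in claim.lower().split() if len(w) > 3]
    return all(w in record.lower() for w in words)

def grounded_summary(llm: StubLLM, record: str) -> list[str]:
    # Unverifiable claims are dropped rather than presented as fact,
    # trading some recall for a lower hallucination rate.
    return [c for c in llm.extract_claims(record) if verify(c, record)]

record = "Patient has type 2 diabetes managed with metformin."
print(grounded_summary(StubLLM(), record))
# ['patient has type 2 diabetes']  (the insulin claim is filtered out)
```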
And while AI has become a massive game-changer in the life sciences, Salloum hopes platforms like Hypercube can strengthen the trust that researchers, scientists and doctors place in the technology.
“It’s very important that everything a system produces be explainable and traceable — and so many of the doctors we speak to say they spend more time verifying a summary from an original record than reading the record themselves,” Salloum said. “Any overhype of AI might cause damage to the whole industry, because once you lose trust, you don’t get it back.”
Building a trustworthy AI
Joining the Google Cloud marketplace is no accident. In truth, Hypercube works much like the Google search engine that has become so ubiquitous in many people’s online lives. Rather than combing through the whole internet every time a user searches for something, the engine answers from an internal index it has already built, Salloum said.
Similarly, Hypercube builds a knowledge base from millions of patient records in which every answer traces back to the original record, cutting the risk of hallucination. In this sense, Hypercube is more of a “research engine” than a search engine, Salloum said.
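A “research engine” of this kind can be pictured as an index in which every hit carries the identifier of the record it came from, so nothing reaches the user without a source. The minimal sketch below is an assumption for illustration, not Hypercube’s design.

```python
# Minimal sketch of retrieval with provenance: answers come from a pre-built
# index, and each result carries the ID of the original record so the output
# stays traceable. Illustrative only.

from collections import defaultdict

class RecordIndex:
    def __init__(self):
        self._postings = defaultdict(set)   # term -> set of record IDs
        self._records = {}                  # record ID -> full text

    def add(self, record_id: str, text: str) -> None:
        self._records[record_id] = text
        for term in text.lower().split():
            self._postings[term].add(record_id)

    def search(self, query: str) -> list[tuple[str, str]]:
        """Return (record_id, text) pairs so every result traces to its source."""
        terms = query.lower().split()
        if not terms:
            return []
        hits = set.intersection(*(self._postings[t] for t in terms))
        return [(rid, self._records[rid]) for rid in sorted(hits)]

index = RecordIndex()
index.add("patient-001", "stage ii breast cancer treated with tamoxifen")
index.add("patient-002", "type 2 diabetes managed with metformin")
print(index.search("tamoxifen"))  # [('patient-001', 'stage ii breast cancer ...')]
```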
Like many AI applications, Hypercube works best as a tool that supports human judgment rather than replacing it.
“We are there to retrieve literature for review, but at the end of the day, you need human intelligence to lead, and you can use technology and AI as a support for humans, but not as a replacement,” Salloum said. “We’re nowhere close to that, and it’s important not to overhype.”
Many of Mendel’s customers are companies whose data holds promise for scientific research but is still too raw to use directly, Salloum said. Mendel has also helped pharma companies draw connections between that data and the patient populations it describes.
Hypercube is also infused with a “world model,” or fundamental definitions within medicine that are almost philosophical in nature, Salloum said.
“What is medicine? What is a treatment? What is cancer? What is cell differentiation? These get distilled into the LLM until it’s forced to memorize that there is a certain structure that’s then fine-tuned with tasks,” Salloum said. “So the system itself is producing it and also justifying it.”
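One way to picture that distillation, under the assumption that the foundational definitions are stored as structured data, is to turn each concept into a supervised fine-tuning example. The format and the definitions below are guesses for illustration only, not Mendel’s training setup.

```python
# Hypothetical illustration of distilling a "world model" into an LLM:
# foundational medical definitions become supervised fine-tuning pairs.
# The definitions and the JSONL format are illustrative assumptions.

import json

WORLD_MODEL = {
    "treatment": "An intervention intended to cure, halt or palliate a disease.",
    "cancer": "A disease of uncontrolled cell growth that can invade other tissue.",
    "cell differentiation": "The process by which a cell becomes specialized in type.",
}

def to_finetuning_examples(definitions: dict[str, str]) -> list[str]:
    """Emit prompt/completion pairs in a generic JSONL fine-tuning format."""
    return [
        json.dumps({"prompt": f"Define: {concept}", "completion": definition})
        for concept, definition in definitions.items()
    ]

for line in to_finetuning_examples(WORLD_MODEL):
    print(line)
```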
The larger purpose of AI in a healthcare setting is to provide access to information that might otherwise be out of reach, Salloum said. And making that information more trustworthy by mitigating hallucinations is at the heart of that access.
“Our mission is to democratize healthcare by creating a centralized medical knowledge where everything we learn from a patient journey can be understood, from every success to every failure of a treatment,” Salloum said. “And the applications, ultimately, are infinite.”