The Real World as a Laboratory: Redefining Drug Discovery and Safety with AI

The real world may be a better testing ground than an actual laboratory for understanding the side effects of drugs, discovering new drugs, and repurposing existing drugs. Traditional clinical trials are the cornerstone of medical research, but, by design, they do not capture the complexity of real-life behavior, such as how people actually take medications. This mismatch between “best case” and “actual use” has far-reaching implications: promising treatments may fail in practice, and valuable therapeutic insights may be missed entirely.

Dr. Michael Chary believes that artificial intelligence (AI), when grounded in real-world data and enriched with medical knowledge, can bridge that gap. His research focuses on using large language models (LLMs) and structured representations of medical reasoning to mine social media data for hidden insights about how drugs are used, misused, or repurposed. He has demonstrated that nonfatal opioid overdoses can be predicted from online activity with greater than 90% accuracy. Dr. Chary has also used social media to identify novel combinations of psychedelics and toxicities of banned weight-loss drugs.

Listening to the Crowd: How the Public Reveals a Hidden Pharmacopoeia

Every day, people self-medicate, mix prescriptions, or experiment with unapproved substances, sometimes out of necessity, sometimes out of desperation. They narrate their experiences in online spaces like Reddit, YouTube, and patient forums, providing a rich source of real-world evidence.

For instance, in the early 2000s, recreational users of ketamine noted on Lycaeum that it helped them recover from depressive crashes following stimulant use, an observation that the medical community arrived at a decade later. Two decades later, esketamine, a derivative of ketamine, is FDA-approved for treatment-resistant depression.

By analyzing online commentary, Dr. Chary identifies off-label benefits, adverse reactions, and emerging drugs of abuse at a scale that clinical trials cannot match. His approach also provides evidence of effects in humans, overcoming the limited correlation between preclinical findings and clinical efficacy.

Teaching AI to Reason Like a Clinician

Despite their power, LLMs like ChatGPT struggle to recognize essential elements of human reasoning. They misinterpret negation (“no signs of stroke”), overlook sarcasm (“I guess that worked…”), or confuse a hypothetical with a fact (“if she had taken the medication…”), limiting their ability to extract clinical information from free text.
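The negation problem can be made concrete with a toy example. This is an illustrative sketch only, not any system described in this article: a naive keyword matcher flags “stroke” even when the note explicitly rules it out, while a crude negation-aware version checks for a nearby negation cue first.

```python
import re

def naive_match(note: str) -> bool:
    """Flag any mention of 'stroke', negated or not."""
    return "stroke" in note.lower()

def negation_aware_match(note: str) -> bool:
    """Flag 'stroke' only when no negation cue appears shortly before it."""
    note = note.lower()
    if "stroke" not in note:
        return False
    # Crude rule: "no"/"denies"/"without" within ~20 characters before the term
    return re.search(r"\b(no|denies|without)\b[\w\s]{0,20}stroke", note) is None

print(naive_match("No signs of stroke"))           # flags a ruled-out finding
print(negation_aware_match("No signs of stroke"))  # correctly suppressed
```

Real clinical NLP systems handle negation with far more sophisticated methods, but the sketch shows why surface-level pattern matching falls short.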

To tackle this, Dr. Chary combines LLMs with ontologies—structured digital representations of medical knowledge that help AI reason, not just predict the next word. An ontology allows a system to infer that “applying a cervical collar” implies suspected spinal trauma, or that a drug name ending in “fentanil” is likely to be a potent opioid analog. It also makes that reasoning transparent, solving a “black box” problem that limits the application of LLMs in clinical care. He has built this combination of LLMs and ontologies into two software packages that diagnose poisonings, Tak and Jarvis.
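The two inferences mentioned above can be sketched as explicit rules. This is a minimal illustration of the idea, not the actual Tak or Jarvis implementation: each inference records a named rule, so the reasoning stays transparent rather than buried in model weights.

```python
# Illustrative ontology-style rules: (condition on the text, inferred concept).
# A real ontology would encode these as a formal concept hierarchy
# (e.g., SNOMED CT) rather than ad hoc string tests.
RULES = [
    (lambda text: "cervical collar" in text, "suspected spinal trauma"),
    (lambda text: any(w.endswith("fentanil") for w in text.split()),
     "potent opioid analog present"),
]

def infer(text: str) -> list[str]:
    """Return the concepts implied by a free-text clinical note."""
    text = text.lower()
    return [concept for condition, concept in RULES if condition(text)]

print(infer("Medics applied a cervical collar at the scene"))
print(infer("patient exposed to carfentanil"))
```

Because each conclusion traces back to a specific rule, a clinician can audit why the system inferred what it did—the transparency the article describes.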

Ontologies give AI a scaffolding to reason by abstraction rather than inductive inference from statistical patterns in the data. The result is a system that understands medical language in a deeper, safer, and more useful way.

Prehospital Medicine: Making Sense of Fragmented Notes

Outside of toxicology, Dr. Chary is collaborating with Dr. Junaid Razzak to apply this augmented parsing to prehospital care. When paramedics respond to a 911 call, they often document events in terse, jargon-laden phrases: “ETOH +, GCS 13, vom x2.” These fragments require substantial interpretation by human providers downstream. Expanding abbreviations does not suffice.
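A small sketch shows why expansion alone falls short. This is an illustrative example, not Dr. Chary’s actual model: expanding the shorthand in “ETOH +, GCS 13, vom x2” is a dictionary lookup, but the clinical implication—that a Glasgow Coma Scale of 13 means a mildly altered level of consciousness—still has to be inferred.

```python
import re

# Hypothetical shorthand table for illustration; real prehospital
# vocabularies are far larger and more ambiguous.
ABBREVIATIONS = {
    "ETOH +": "alcohol involvement suspected",
    "vom x2": "vomited twice",
}

def parse_note(note: str) -> dict:
    """Expand known shorthand and add simple clinical inferences."""
    findings = {"expansions": [], "inferences": []}
    for abbrev, meaning in ABBREVIATIONS.items():
        if abbrev in note:
            findings["expansions"].append(meaning)
    # GCS = Glasgow Coma Scale; 15 is fully alert, below 15 is impaired
    match = re.search(r"GCS (\d+)", note)
    if match:
        gcs = int(match.group(1))
        findings["expansions"].append(f"Glasgow Coma Scale {gcs}")
        if gcs < 15:
            findings["inferences"].append("altered level of consciousness")
    return findings

print(parse_note("ETOH +, GCS 13, vom x2"))
```

The "inferences" list is the part that simple abbreviation expansion misses—and the part that ontology-backed parsing supplies.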

By integrating ontologies with natural language processing, Dr. Chary’s models can extract meaning from prehospital records, even when critical information is implied rather than stated. That helps hospitals anticipate patient needs, public health departments spot trends, and clinicians better understand how interventions outside the hospital affect outcomes.

A New Model for Listening at Scale

“By doing this research, we will be able to listen to people deeply—even in this digital age—learning from their collective wisdom, identifying new treatments, and uncovering missed opportunities in care,” says Dr. Chary. “This also gives a voice to marginalized communities, who may feel more comfortable communicating online than engaging directly with the healthcare system.”

That idea echoes the timeless wisdom of William Osler, a founder of modern medicine:

“If you listen to the patient, they will tell you the disease.”

And sometimes the treatment. 
