Buried in the Medical Record: How Dyania’s AI Reads Between the Lines

Somewhere inside every hospital, there’s a patient whose story is sitting quietly in the electronic medical record—thousands of lines of text, physician notes, and lab results that only a human can read. The data that could change that patient’s life is there, buried in plain sight. But no one has time to find it.

For decades, doctors, nurses, and researchers have been overwhelmed by this reality. An estimated 80 percent of medical data exists as unstructured text—progress notes, pathology reports, imaging summaries—written and stored, but rarely analyzed. In a busy clinic, a physician may have minutes with a patient, not hours to comb through years of history. In clinical research, those missed details can mean the right patients never get connected to potentially life-changing therapies in time.

That’s the problem Eirini Schlosser set out to solve. A serial entrepreneur with a background in investment banking and a family full of physicians, Eirini saw both the power of data and the cost of leaving it untouched. Her company, Dyania Health, developed Synapsis AI, a medically trained large language model that can read and interpret an entire patient record in seconds—surfacing patterns, identifying eligibility for trials, and ultimately informing better, faster care.

Today, Dyania is working with some of the nation’s leading health systems, including Cleveland Clinic, to prove that AI can do more than speed up chart review—it can expand access to care and accelerate discovery itself.

In this conversation, Eirini shares how she went from working on M&A deals to unlocking the hidden intelligence inside hospital firewalls, what it feels like when AI outperforms humans in a clinical setting, and how HealthX Ventures has helped Dyania scale its impact far beyond clinical trials.

What was the moment when you realized unstructured medical data was the problem you had to solve? Can you paint a picture of the gap you saw that others were missing?

ES: I was working on the tech side of M&A at Morgan Stanley and started working with some founders who were exiting their businesses. That’s when I fell in love with what we now call AI—although at the time, around 2013–2014, it wasn’t even called that.

I started moonlighting on my first business, which had nothing to do with healthcare. People often joke that first-time entrepreneurs fall in love with a solution, while second-time entrepreneurs fall in love with a problem. My first solution was a system that would ingest and read Yelp and Google reviews and then predict or recommend places to go. It was like a “Netflix for where to go” for food and drink.

That got me into the natural language processing space. We were working with free text and making something like a million API calls to Yelp daily, effectively pulling down the entire database every day. Long story short, I started pursuing efforts in what, at the time, was called natural language understanding—this was before the advancement of transformers and large language models.

With that first business, I was a first-time entrepreneur, and the product was really designed for a specific group of individuals who go out to new places on a regular basis. It turned out to be a very small market. Most people don’t go out to new places weekly; they have their favorites they go to on repeat. As we started to realize that launching new cities was expensive and the market was limited, I began working with some of my investors in advisory roles with different startups.

A few of those angel investors actually owned hospitals, and they started asking my thoughts on the value of EMR data from a clinical research perspective.

If we take a step back: I grew up in a family of physicians. The “black sheep” in my family is one cousin who is a pharmacist—that gives you a sense of how medical my family is. I was going down that path too. I was majoring in biochemistry and about to go into medicine. Thankfully, due to family relationships, I had the opportunity to do clinical rotations early.

During those rotations, I found standard-of-care medicine rather routine, and I really missed mathematics, problem solving, and working on new problems. Because it’s never going to be okay to do experimental medicine on patients, I decided to pursue a different path.

So through a hop and a skip—leaving medicine, accidentally falling into investment banking—I eventually went back to what was always my true calling: the entrepreneurial side, specifically within AI. By the time I started my second business, I already knew how to code and could set up instances and stacks on my own.

When my investors asked me to help them understand EMR data value, I dove right in. I realized, or rather remembered, that chart review is completely manual, and it can take clinicians anywhere from 30 minutes to two hours to review a single patient’s history.

That’s when it started clicking in my head that there was this absurd insult to humanity: EMR data was being stored in databases—billions of records typed in daily—and yet it was never informing patient care, simply because no one had time to read the medical records.

So I set out to automate electronic medical record chart review. That was the mission from the beginning. This was before ChatGPT became a global phenomenon and even before large language models were widely publicized. We pivoted early, in 2022, to an LLM-based system. Most of my AI research team had come out of Big Tech; they were among the very small group of people globally who had experience training large language models at the time.

The pandemic, for me, was actually a blessing in disguise because we could build in stealth. This is not a problem for the faint of heart—it requires deep consideration of data management and cleaning, as well as how medicine is delivered and the logic behind clinical acumen today.

Let’s say I’m a patient with a rare heart condition being seen at Cleveland Clinic. How would my care experience look different because of Synapsis AI?

ES: I actually think the biggest opportunity for a different experience happens before that patient is even diagnosed.

What we’re doing is literally reading to answer specific clinical questions about a patient’s history. That means any given set of characteristics in that patient’s history could be relevant for a protocol. For example, one of the principal investigators we’re working with at Cleveland Clinic said, “I want to find every patient who has this set of, say, 10 characteristics that I would view as high risk for cardiac amyloidosis.” These are patients who have these characteristics but have never been diagnosed.

They wanted to refine a protocol around which earlier characteristics in the patient journey ultimately led to a successful diagnosis—so essentially a retrospective study. The ultimate goal is that we can say, “Okay, now we’ve perfected the recipe—the algorithm that tells Synapsis AI: go find all these patients, and do it very quickly.”

We’ve done similar work with another healthcare system. In cardiac amyloidosis, for example, the condition is heavily underdiagnosed. Patients may present with non-cardiac symptoms, like carpal tunnel syndrome, and other issues that eventually add up to an amyloidosis diagnosis. Even something like cardiac wall thickness can have different thresholds for men and women. Right now, the belief is that most cardiac amyloidosis patients are men—but that could be because women are underdiagnosed if they don’t meet those historical thresholds.
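To make that “recipe” a little more concrete, here is a minimal sketch of what a rule-style screen over such characteristics could look like. Everything in it is a hypothetical assumption for illustration: the thresholds, field names, and cutoff values are stand-ins, not Dyania’s actual protocol or clinically validated numbers.

```python
# Hypothetical sketch of a high-risk screen like the one described above.
# Thresholds, field names, and the rule itself are illustrative assumptions,
# not Dyania's protocol or validated clinical cutoffs.

def wall_thickness_flag(thickness_mm: float, sex: str) -> bool:
    """Flag abnormal cardiac wall thickness with sex-specific thresholds,
    rather than a single historical cutoff that can underdiagnose women."""
    cutoff_mm = 12.0 if sex == "F" else 13.0  # illustrative values only
    return thickness_mm >= cutoff_mm

def high_risk_for_amyloidosis(characteristics: dict, min_hits: int = 7) -> bool:
    """Screen in a never-diagnosed patient when enough of the investigator's
    ~10 high-risk characteristics are present in the record."""
    if characteristics.get("diagnosed_cardiac_amyloidosis", False):
        return False  # the retrospective study targets undiagnosed patients
    hits = sum(1 for name, present in characteristics.items()
               if name != "diagnosed_cardiac_amyloidosis" and present)
    return hits >= min_hits
```

The hard part, of course, is that most of these characteristics live in free text rather than structured fields, which is exactly what Synapsis AI is reading for.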

So, in terms of impact on patient care, step one is cleaning and processing historical data so it can inform how research is delivered. Step two—which we are already actively working on in parallel—is impacting patient care in real time.

For example, we’re working with the Cancer Center at Cleveland Clinic on evidence-guided care pathways. At each point in time, there can be a data-validated step that says: this patient has this tumor stage, they’ve already had a resection, and now we’re looking at treating with a particular drug. Evidence-based guidelines may say that’s the correct next step. In an ideal world, that entire set of decisions is data-driven. That’s where you really start to see how impactful this can be for patients.
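As a rough illustration of one such “data-validated step,” here is a sketch of a single decision point. The tumor stage, the resection check, and the recommendation are hypothetical stand-ins, not Cleveland Clinic’s actual pathway logic.

```python
from dataclasses import dataclass

# Hypothetical sketch of one decision point in an evidence-guided care
# pathway. The state fields and the rule are illustrative assumptions only.

@dataclass
class OncologyState:
    tumor_stage: str           # e.g., "II"
    had_resection: bool        # surgical resection already performed
    on_adjuvant_therapy: bool  # already receiving the candidate drug

def next_step(state: OncologyState) -> str:
    """Return a guideline-suggested next step for this illustrative pathway;
    a real system would also cite the evidence behind each branch."""
    if (state.tumor_stage == "II"
            and state.had_resection
            and not state.on_adjuvant_therapy):
        return "Consider adjuvant therapy per evidence-based guidelines"
    return "No pathway action triggered; continue routine follow-up"
```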

For clinicians drowning in patient charts, what does your technology feel like in practice?

ES: Globally—not just at Cleveland Clinic—physicians are heavily understaffed. There’s a shortage of cardiologists, for example. They’re running from patient to patient and under pressure to keep volumes up, because there aren’t enough appointment slots for all the patients whose diseases have progressed.

They’re trying to maximize their time with patients, and the reality is they don’t have time to read two hours of a patient’s history for every visit. It’s simply a math problem: how many hours exist in the day?

So it’s not even that they’re drowning in charts; the status quo is effectively zero. The data is being typed, recorded, and stored, and then never read again. I genuinely mean that. It might be read by chart abstractors for registries, or skimmed by research coordinators who glance at the last few notes while screening for trials. But the vast majority of that data, at a global level, isn’t drowning anyone, because clinicians are already drowning just doing their day jobs.

What we want to do is automate the work that isn’t even happening right now and allow those insights to actually inform clinical care.

Many patients miss narrow windows of eligibility for certain therapies or trials. Can you give a real story of how Synapsis AI helped ensure a patient wasn’t “lost in the shuffle”?

ES: We had a project at another healthcare system—one we haven’t publicly announced yet—where we were tasked with finding patients who had severe or very severe aortic stenosis, plus the onset of certain symptoms and other criteria. Importantly, these were patients who had not been referred for an aortic valve replacement.

From the onset of symptoms, if these patients don’t get an aortic valve replacement, they essentially have about one year to live, maybe two at most. So if they don’t get referred, it’s quite literally a matter of life and death.

We found 488 patients. Some had already died by the time we hit “go.” For others, by the time the team tried to contact them, they couldn’t be reached. But about 85 patients were able to start coming in for appointments.

If you take a step back and look at 488 patients, that’s almost 3,000 years of life saved. And these are fully reimbursable procedures for the healthcare system—procedures like transcatheter aortic valve replacement. So we were able to both save patients and drive revenue for the health system. It’s a win-win for everyone.

The Cleveland Clinic partnership is a big milestone. Can you share a behind-the-scenes story from the pilot—something that showed you this technology could scale enterprise-wide?

ES: I think the biggest lightbulb moment for everyone was an organized day we did with the Cancer Center. We brought in research nurses and gave them the same task as Synapsis AI: read and answer questions about specific notes.

They recognized some of the notes and physicians, because this was all within Cleveland Clinic, inside their firewall. It was the same task, side-by-side.

What was incredible for us was that Synapsis AI outperformed the humans. The nurses were reading at a pace where, naturally, they sometimes missed information. We measured both human and AI performance against a ground truth: the consensus answer that multiple reviewers agreed was correct.

When we surpassed individual human performance, that was really a watershed moment. We ended up with a poster at ASCO and publications coming out of that work.

On that specific day, everyone was in a room together. The nurses were there for the full day—about nine hours—and then had homework afterward. Meanwhile, our AI research team came in, showed them what was happening under the hood, and then hit “go.” A loading bar appeared, it ran, and then suddenly it was done.

The clinicians said, “What do you mean, that’s it? There’s nothing else to demo?” And we said, “No, here are the results.” We literally emailed them the results on the spot. They just stood there in shock that it had already read everything.

Moments like that really built understanding of how the technology works and, importantly, trust in its accuracy. That day was a big moment during the pilot period.

How do you ensure trust and transparency for physicians—so they can see not just an AI’s conclusion, but the why behind it?

ES: We actually train our models to have to explain themselves.

Over a third of our full-time team are physicians. They complete cases on our proprietary dataset of about 9 million de-identified, longitudinal EMRs that go back years into each patient’s history. Our physicians read a medical record, then write a disease-appropriate question, the correct answer, and a justification. That justification pinpoints the exact part of the medical record that led them to that answer.

We then use that as training data to force the AI to explain itself. Having that clinician-annotated dataset has made a very significant difference in the quality of our training data and, as a result, in how the AI performs.

Now when we ask the system a question, it answers and provides a justification. That justification effectively acts as an audit trail for how the AI arrived at its conclusion.
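One way to picture such a training example is the sketch below. The schema, field names, and example content are our assumptions for illustration; Dyania’s actual format is not public.

```python
from dataclasses import dataclass

# Sketch of what one clinician-annotated case might look like. The schema,
# field names, and example content are assumptions for illustration only.

@dataclass
class AnnotatedCase:
    record_id: str       # de-identified EMR identifier
    question: str        # disease-appropriate question written by a physician
    answer: str          # the correct answer, per the physician
    justification: str   # the exact passage of the record that supports it
    source_note_id: str  # which note in the longitudinal record it came from

example = AnnotatedCase(
    record_id="emr-000123",
    question="Has this patient been referred for aortic valve replacement?",
    answer="No",
    justification=("Cardiology note, 2021-04-02: 'severe AS; no referral "
                   "placed at this visit.'"),
    source_note_id="note-457",
)
```

Training on question–answer–justification triples like this is what forces the model to point back to its evidence, which is the audit trail Eirini describes.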

Beyond the capital, what has HealthX Ventures contributed to Dyania’s journey? Any specific introductions or strategic support that made a tangible difference?

ES: We were recently introduced, through HealthX, to one of the largest healthcare systems in the U.S.

I also want to call out the media, press, and visibility we’ve had. HealthX introduced us to our PR and comms director, who has been fantastic to work with. The combination of that PR support plus introductions to large healthcare systems has really influenced where we see our pipeline going in 2026.

If we imagine five years from now, what will be possible in healthcare because Synapsis AI exists?

ES: Our mission is literally to unlock all of the embedded information in the electronic medical record, so physicians can deliver informed, improved, and data-validated care.

I think that changes the entire nature of how medicine is delivered today. Right now, physicians are very busy and are often going off their individual experience. Even clinical research that drives guidelines is frequently based on observational work in only a few hundred patients.

That looks very different when you can validate medical decisions against what has been done in a de-identified dataset of a million patients, for example.

What excites you most about the untapped potential of unstructured medical data?

ES: For us, it’s about truly mimicking medical reasoning.

If you take a foundation model off the shelf, it’s typically set up to read one note and answer questions about that one note: a single note, a single question, a single answer.

In the real world, a physician reads an entire medical record—dozens of notes, labs, imaging, pathology—to deduce what might be a single conclusion from that record. I think AI models in other industries have grossly underestimated both the complexity of how the data is stored and the level of medical reasoning that needs to be layered on top.

We’re most excited about what we’re doing in that field: building models that can reason across an entire longitudinal record, not just one note at a time.
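The gap she describes can be boiled down to a sketch like the one below, where answer_question stands in for any single LLM call. The function names and the naive concatenation strategy are our assumptions, not how Synapsis AI actually works.

```python
from typing import Callable

# Sketch of single-note QA versus longitudinal-record QA. `answer_question`
# is a stand-in for any LLM call; nothing here reflects Synapsis AI's design.

def single_note_qa(note: str, question: str,
                   answer_question: Callable[[str, str], str]) -> str:
    # The off-the-shelf pattern: one note in, one answer out.
    return answer_question(note, question)

def longitudinal_qa(notes: list[str], question: str,
                    answer_question: Callable[[str, str], str]) -> str:
    # The harder task: reason over the whole record. Naively concatenating
    # dozens of notes, labs, and reports in chronological order runs into
    # context limits, conflicting statements, and time ordering, which is
    # where the medical-reasoning layer has to do real work.
    full_record = "\n\n".join(notes)
    return answer_question(full_record, question)
```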

Learn more about Dyania at dyaniahealth.com.

Photo by cottonbro studio.
