13 posts tagged with "architecture"

Introducing Parthenon: Transforming Healthcare with AI-Powered Outcomes Research

· 6 min read
Creator, Parthenon

Pinned Post | Originally published March 7, 2026

Outcomes research has evolved alongside the broader arc of healthcare analytics infrastructure. Early siloed clinical systems produced fragmented administrative and claims data with limited analytic utility — adequate for billing, but structurally unsuitable for longitudinal cohort construction or comparative effectiveness work. The meaningful use era expanded the availability of structured clinical data, yet interoperability failures meant that patient journeys remained fractured across institutional boundaries, undermining the real-world evidence studies that outcomes researchers depend on. The shift to integrated analytics platforms — particularly the adoption of common data models like OMOP/OHDSI — marked a genuine inflection point: federated network studies, standardized phenotyping, and reproducible retrospective analyses became operationally feasible at scale. Now a fourth generation is taking shape, one in which AI-augmented clinical intelligence moves outcomes research from retrospective description toward prospective, near-real-time evidence generation — enabling dynamic cohort surveillance, treatment heterogeneity detection, and value-based care signal identification that was previously impractical outside of narrow clinical trial settings.

Parthenon is built for this fourth generation.

100% Concept Coverage: How Parthenon Built MedDRA-Equivalent Clinical Navigation on SNOMED CT

· 20 min read
Creator, Parthenon
AI Development Assistant

Parthenon's Vocabulary Search now provides 100% navigational coverage of all 105,324 standard SNOMED CT Condition concepts through 27 curated clinical groupings — achieving functional parity with MedDRA's System Organ Class navigation while preserving SNOMED's superior clinical granularity. This is the story of diagnosing the SNOMED-OMOP domain boundary problem, engineering a cross-domain hierarchy builder, curating a clinically intelligent grouping layer, and systematically closing every coverage gap until no standard concept was left behind.
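The coverage-gap closing described above can be pictured as a simple audit loop: every standard concept must reach at least one curated grouping through the hierarchy, and anything left over is a gap to close. A minimal sketch, with hypothetical data shapes — in practice the ancestor pairs would come from an OMOP `concept_ancestor`-style table:

```python
# Toy coverage audit for curated concept groupings.
# `ancestors` maps each concept ID to the set of its hierarchy ancestors;
# `groupings` maps a grouping name to its root concept IDs. All names
# and IDs here are illustrative, not Parthenon's actual data.

def coverage_report(concepts, ancestors, groupings):
    covered = set()
    for roots in groupings.values():
        for c in concepts:
            if c in roots or ancestors.get(c, set()) & roots:
                covered.add(c)
    uncovered = set(concepts) - covered
    pct = 100.0 * len(covered) / len(concepts)
    return pct, sorted(uncovered)

concepts = {1, 2, 3, 4}
ancestors = {1: {10}, 2: {10}, 3: {20}, 4: set()}
groupings = {"Cardiac": {10}, "Renal": {20}}
pct, gaps = coverage_report(concepts, ancestors, groupings)
# concept 4 reaches no grouping root: 75% coverage, gap list [4]
```

Iterating until `gaps` is empty is, in miniature, the "no standard concept left behind" process.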

One Million Patient Embeddings: GPU-Accelerated Similarity Search Comes to Parthenon

· 20 min read
Creator, Parthenon
AI Development Assistant

Two days ago, we shipped the Patient Similarity Engine — a multi-modal system that scores patients across six clinical dimensions on OMOP CDM. The architecture was sound. The algorithms worked. But there was a problem hiding in plain sight: none of our patients had embeddings.

The embedding pipeline had been silently failing since day one. Three type mismatches between our PHP backend and Python AI service meant that every embedding request returned a validation error, was caught by a try/catch block, and logged as a warning that nobody read. The feature vectors were all there — conditions, drugs, measurements, procedures — but the 512-dimensional dense vectors that would make similarity search fast at scale? Zero. For every source. For every patient.
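The failure mode here is worth sketching, because it is so common: per-item error handling that logs and moves on will happily absorb a 100% failure rate. A hedged illustration (the function names are hypothetical, not Parthenon's actual pipeline code) of how tracking the failure rate turns silent warnings into a loud signal:

```python
import logging

log = logging.getLogger("embeddings")

def embed_batch(patients, embed_one):
    """Illustrative wrapper: per-patient errors were caught and logged,
    so total failure looked like routine noise. Counting outcomes and
    raising when *nothing* succeeds surfaces systemic bugs, such as a
    type mismatch at a service boundary."""
    ok, failed = 0, 0
    for p in patients:
        try:
            embed_one(p)
            ok += 1
        except Exception as exc:
            failed += 1
            log.warning("embedding failed for %s: %s", p, exc)
    if failed and ok == 0:
        # Every request failed: almost certainly a code defect,
        # not bad data. Fail loudly instead of warning quietly.
        raise RuntimeError(f"all {failed} embedding requests failed")
    return ok, failed
```

A mixed batch still degrades gracefully; only the all-failures case escalates.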

Tonight, we fixed all three bugs, refactored the embedding pipeline from CPU-only SapBERT to GPU-accelerated Ollama, upgraded from 512 to 768 dimensions, introduced batch deduplication that delivered a 123x throughput improvement, and generated embeddings for 1,007,007 patients across three CDM sources. This is the story of what broke, what we built, and what it unlocks.
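Batch deduplication of the kind described above relies on a simple observation: many patients share identical feature strings, so each distinct string needs to be embedded only once, with the vectors fanned back out afterwards. A minimal sketch under assumed names (the real pipeline's interfaces may differ):

```python
def embed_unique(texts, embed_batch_fn):
    """Deduplicate before embedding: embed each distinct text once,
    then map the vectors back onto the original positions.
    `embed_batch_fn` stands in for a call to the embedding service."""
    unique = list(dict.fromkeys(texts))      # order-preserving dedup
    vectors = embed_batch_fn(unique)         # one vector per distinct text
    lookup = dict(zip(unique, vectors))
    return [lookup[t] for t in texts]

calls = []
def fake_embed(batch):
    calls.append(len(batch))                 # record batch size seen
    return [[float(len(t))] for t in batch]  # stand-in 1-d "vectors"

texts = ["diabetes", "hypertension", "diabetes", "diabetes"]
vecs = embed_unique(texts, fake_embed)
# the embedding service saw 2 distinct texts instead of 4 inputs
```

The achievable speedup scales with the duplication ratio in the data, which is how a corpus with heavily repeated clinical feature strings can see throughput gains of the magnitude reported here.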

Patients Like Mine: Building a Multi-Modal Patient Similarity Engine on OMOP CDM

· 18 min read
Creator, Parthenon
AI Development Assistant

For twenty years, the question "which patients are most like this one?" has haunted clinical informatics. Molecular tumor boards want to know: of the 300 patients in our pancreatic cancer corpus, which ones had the same pathogenic variants, the same comorbidity profile, the same treatment history — and what happened to them? Population health researchers want to seed cohort definitions not from abstract inclusion criteria but from a concrete index patient. And every clinician who has ever stared at a complex case has wished for a button that says "show me others like this."

Today, Parthenon ships that button. The Patient Similarity Engine is a multi-modal matching system that scores patients across six clinical dimensions — demographics, conditions, measurements, drugs, procedures, and genomic variants — with user-adjustable weights, dual algorithmic modes, bidirectional cohort integration, and tiered privacy controls. It works across any OMOP CDM source in the platform, from the 361-patient Pancreatic Cancer Corpus to the million-patient Acumenus CDM.
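The user-adjustable weighting across dimensions reduces to a normalized weighted blend of per-dimension similarity scores. A minimal sketch, assuming each dimension has already been scored into [0, 1] — the dimension names mirror the six listed above, but the weighting scheme shown is illustrative, not necessarily the engine's exact formula:

```python
def combined_similarity(scores, weights):
    """Weighted blend of per-dimension similarity scores in [0, 1].
    Normalizing by the active weight mass keeps the result in [0, 1]
    even when the user zeroes out some dimensions."""
    total = sum(weights.get(d, 0.0) for d in scores)
    if total == 0:
        return 0.0
    return sum(s * weights.get(d, 0.0) for d, s in scores.items()) / total

scores = {"demographics": 0.9, "conditions": 0.6, "drugs": 0.3}
weights = {"demographics": 1.0, "conditions": 2.0, "drugs": 1.0}
sim = combined_similarity(scores, weights)
# (0.9*1 + 0.6*2 + 0.3*1) / 4 = 0.6
```

Doubling the weight on conditions, as a tumor board might, pulls the blended score toward comorbidity-profile matches.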

This post tells the story of why it was needed, what we studied before building it, how it works under the hood, and what we learned along the way.

Poseidon and Vulcan: The Gods of Continuous Data Ingestion

· 12 min read
Creator, Parthenon

Healthcare data does not arrive in neat packages. It streams — continuously, chaotically, from dozens of transactional systems that were never designed to talk to each other. EHR encounters appear as HL7 ADT messages. Lab results materialize through OBX segments hours after the draw. Radiology reports surface from PACS archives with inconsistent coding. Claims trickle in from clearinghouses days or weeks after the visit. Genomic panels arrive as VCF files from external laboratories with their own nomenclatures and timelines.

Transforming this unruly sea of clinical data into a coherent, research-ready OMOP Common Data Model is the central engineering challenge of any outcomes research platform. And until now, Parthenon handled it the same way most platforms do: as a series of one-time bulk loads. Upload a file. Map the concepts. Write the CDM. Move on.

That era is over.

Today we introduce two new engines to the Parthenon pantheon — Vulcan and Poseidon — purpose-built for the reality of continuous healthcare data integration.

Building a Clinically Intelligent Risk Scoring Engine on OMOP CDM

· 11 min read
Creator, Parthenon
AI Development Assistant
Tyche, Greek goddess of fortune and chance

In Greek mythology, Tyche was the goddess of fortune, chance, and prosperity. Depicted with a cornucopia of abundance and the wheel of fate, she governed the unpredictable forces that determined whether a city would flourish or fall. The ancient Greeks understood that outcomes are shaped by forces beyond individual control — health, circumstance, and probability. In the Parthenon pantheon, Tyche presides over population risk scoring: the quantification of clinical probability, the stratification of patients by the likelihood of outcomes they cannot fully control, and the transformation of uncertainty into actionable intelligence.

We built a population risk scoring engine that runs 20 validated clinical risk calculators against any OMOP CDM dataset — then immediately realized the approach was wrong. This post covers what we built, why we tore it apart, and the v2 architecture that replaced "run everything on everyone" with cohort-scoped, recommendation-driven clinical risk analysis.
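One way to picture the shift from "run everything on everyone" to recommendation-driven analysis: a calculator is only recommended when the cohort actually records the inputs it needs. A hedged sketch — the calculator definitions, required-input sets, and coverage threshold below are all hypothetical, not Parthenon's actual v2 logic:

```python
def recommend_calculators(cohort_concepts, calculators):
    """Suggest only the risk calculators whose required inputs are
    sufficiently present in the cohort, ranked by input coverage."""
    recs = []
    for name, required in calculators.items():
        coverage = len(required & cohort_concepts) / len(required)
        if coverage >= 0.8:                  # arbitrary cutoff for the sketch
            recs.append((name, coverage))
    return sorted(recs, key=lambda r: -r[1])

cohort = {"sbp", "age", "smoker", "cholesterol", "egfr"}
calcs = {
    "ASCVD": {"sbp", "age", "smoker", "cholesterol", "hdl"},
    "CKD-EPI": {"age", "egfr"},
}
recs = recommend_calculators(cohort, calcs)
# CKD-EPI is fully covered (1.0); ASCVD sits at 0.8 coverage
```

Scoping the expensive computation to recommended calculators on a defined cohort, rather than all twenty on the full dataset, is the essence of the v2 redesign described here.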

The Magical Ladies of Parthenon

· 11 min read
Creator, Parthenon
AI Development Assistant

In Greek mythology, the great temple atop the Acropolis housed not just Athena, but an entire pantheon of divine figures — each wielding a unique gift. Parthenon, our unified OHDSI outcomes research platform, follows the same philosophy. Behind the scenes, four mythological women power the intelligence layer that transforms raw clinical data into actionable research: Hecate, Phoebe, Ariadne, and Arachne.

Full HADES Parity: Parthenon Now Supports All 12 OHDSI Database Dialects

· 6 min read
Creator, Parthenon
AI Development Assistant

One of OHDSI's greatest strengths is database agnosticism. The HADES ecosystem — via SqlRender and DatabaseConnector — lets researchers write analyses once and run them against SQL Server, PostgreSQL, Oracle, Snowflake, BigQuery, and seven other platforms without modification. Today, Parthenon achieved full parity with that capability: all 12 HADES-supported database dialects are now covered across both the PHP SQL translator and the R runtime.
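The core idea behind SqlRender-style translation is that analyses are authored once in OHDSI's SQL Server-flavored SQL and rewritten per target platform via pattern substitution. A toy illustration of that mechanism in Python — this handles exactly one construct and is nothing like the real translator's rule table, but it shows the shape of the problem Parthenon's PHP SQL translator solves across all 12 dialects:

```python
import re

def translate_dateadd(sql, dialect):
    """Toy dialect translation: rewrite SQL Server's DATEADD(day, n, x)
    for other platforms. Real translators use large rule tables covering
    functions, temp tables, and paging, not a single regex."""
    pattern = re.compile(r"DATEADD\(day,\s*(\w+),\s*(\w+)\)", re.IGNORECASE)
    if dialect == "postgresql":
        return pattern.sub(r"(\2 + \1*INTERVAL'1 day')", sql)
    if dialect == "oracle":
        return pattern.sub(r"(\2 + \1)", sql)
    return sql  # SQL Server source dialect: pass through unchanged

sql = "SELECT DATEADD(day, 30, index_date) FROM cohort"
translated = translate_dateadd(sql, "postgresql")
# -> "SELECT (index_date + 30*INTERVAL'1 day') FROM cohort"
```

Multiply this by hundreds of such rewrite rules and twelve targets, and the value of maintaining parity in both the PHP translator and the R runtime becomes clear.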

Welcome to Acropolis: One Command from Clone to Production

· 13 min read
Creator, Parthenon
AI Development Assistant

Eighteen Docker services. Three environment files. A reverse proxy with auto-TLS. Database admin GUI. Container management dashboard. Enterprise SSO. And if you want the full stack? One command:

```shell
python3 install.py --with-infrastructure
```

This is the story of how we built Acropolis — the infrastructure layer that turns Parthenon from a research application into a production platform — and what we learned when we decided to ship it inside the same repository.

The Rise of Darkstar: How We Rebuilt the OHDSI R Runtime for Production

· 16 min read
Creator, Parthenon
AI Development Assistant

Every platform has a weak link. For Parthenon, it was the R container.

PHP handled 200 concurrent API requests without breaking a sweat. Python served AI inference with async workers. PostgreSQL managed million-row queries across six schemas. Redis cached sessions at sub-millisecond latency. And then there was R — single-threaded, fragile, running bare Rscript as PID 1 with no supervision, no timeouts, and a health check that lied.

This is the story of how we tore it down and built Darkstar — a production-grade R analytics engine that runs OHDSI HADES analyses concurrently, recovers from crashes automatically, and executes 35% faster than the container it replaced.