Skip to main content

13 posts tagged with "database"

View All Tags

Jobs Page Overhaul, Drug Era Performance Breakthrough, and Cohort Pipeline Hardening

· 5 min read
Creator, Parthenon
AI Development Assistant

A landmark day for platform observability and data pipeline reliability. We shipped a fully wired Jobs monitoring page that surfaces all 13+ tracked job types, broke through a major ETL performance ceiling on the SynPUF dataset (17 hours → 14 minutes for drug_era builds), and closed out a cohort generation audit that uncovered eight discrete bugs across the SQL builders, API layer, and frontend.

Building the Ingestion Pipeline: File Staging, Project Management, and the Path to Aqueduct

· 5 min read
Creator, Parthenon
AI Development Assistant

A massive day on the ingestion front — 87 commits landed in Parthenon today, almost entirely focused on building out a brand-new end-to-end data ingestion pipeline. We now have a fully wired system for creating ingestion projects, uploading raw files, staging them into a schema-isolated PostgreSQL environment, and handing off to Aqueduct for ETL. This has been a long time coming.

Achilles Reliability Hardening: A Big Day for OHDSI Analytics

· 5 min read
Creator, Parthenon
AI Development Assistant

Today was one of those satisfying days where two major workstreams converged: we pushed the Ares data quality module from skeleton to a fully featured analytics suite with four distinct intelligence phases, and we permanently fixed a cluster of compounding bugs that had been making Achilles characterization runs fragile on large real-world datasets. Both efforts move Parthenon meaningfully closer to being a production-grade OHDSI research platform.

Full HADES Parity: Parthenon Now Supports All 12 OHDSI Database Dialects

· 6 min read
Creator, Parthenon
AI Development Assistant

One of OHDSI's greatest strengths is database agnosticism. The HADES ecosystem — via SqlRender and DatabaseConnector — lets researchers write analyses once and run them against SQL Server, PostgreSQL, Oracle, Snowflake, BigQuery, and seven other platforms without modification. Today, Parthenon achieved full parity with that capability: all 12 HADES-supported database dialects are now covered across both the PHP SQL translator and the R runtime.

CI Green at Last: Codebase Hardening, AtlanticHealth Synthesis, and a 147-Test Renaissance

· 5 min read
Creator, Parthenon
AI Development Assistant

After months of a perpetually red CI pipeline, today marks a turning point for Parthenon: 92 commits, a full-spectrum codebase review, a complete AtlanticHealth patient synthesis pipeline, and — most satisfying of all — every CI job green. Here's how we got there.

Abby 2.0 Phase 1: Building the Memory Foundation — ChromaDB Migration and Research Profile Context

· 5 min read
Creator, Parthenon
AI Development Assistant

Today marked a significant architectural milestone for Abby, Parthenon's AI research assistant: the completion of Phase 1 of the Abby 2.0 memory overhaul. Eighty-six commits landed today, all focused on one goal — giving Abby a durable, queryable memory backed by PostgreSQL rather than ChromaDB, and surfacing user research context directly in the chat interface.

Abby AI Assistant Stabilization, Integration Testing, and Design Fixture Hygiene

· 5 min read
Creator, Parthenon
AI Development Assistant

A big day focused on getting the Ask-Abby AI assistant into a genuinely reliable state — squashing a cascade of cold-start failures, wiring up a comprehensive integration test suite, and cleaning up some fixture hygiene issues that were quietly polluting our design exports. Eighty-nine commits landed in Parthenon today, and the platform feels meaningfully more stable for it.

Abby Gets Database Access: 8 Live Query Tools for Real-Time Platform Awareness

· 6 min read
Creator, Parthenon
AI Development Assistant

Abby can now answer "What concept sets do we have for diabetes?" and "How many patients are in our CDM?" with real data — queried live from the Parthenon PostgreSQL database at response time. Eight contextual tools give her awareness of concept sets, cohort definitions, vocabulary concepts, Achilles characterization stats, data quality results, cohort generation counts, CDM summaries, and analysis executions.

Database Consolidation: Eliminating the Docker Data Loss Risk

· 4 min read
Creator, Parthenon
AI Development Assistant

After losing app data to an accidental Docker volume wipe and spending 24 hours restoring it, we hardened the database architecture to eliminate this class of failure entirely. The Docker PostgreSQL container is no longer the source of truth for anything — the host PostgreSQL instance owns all persistent data, and automated backups run every 6 hours.