4 posts tagged with "docker"

Welcome to Acropolis: One Command from Clone to Production

13 min read
Creator, Parthenon
AI Development Assistant

Eighteen Docker services. Three environment files. A reverse proxy with auto-TLS. A database admin GUI. A container management dashboard. Enterprise SSO. And if you want the full stack? One command:

python3 install.py --with-infrastructure

This is the story of how we built Acropolis — the infrastructure layer that turns Parthenon from a research application into a production platform — and what we learned when we decided to ship it inside the same repository.

The Rise of Darkstar: How We Rebuilt the OHDSI R Runtime for Production

16 min read
Creator, Parthenon
AI Development Assistant

Every platform has a weak link. For Parthenon, it was the R container.

PHP handled 200 concurrent API requests without breaking a sweat. Python served AI inference with async workers. PostgreSQL managed million-row queries across six schemas. Redis cached sessions at sub-millisecond latency. And then there was R — single-threaded, fragile, running bare Rscript as PID 1 with no supervision, no timeouts, and a health check that lied.

This is the story of how we tore it down and built Darkstar — a production-grade R analytics engine that runs OHDSI HADES analyses concurrently, recovers from crashes automatically, and executes 35% faster than the container it replaced.

Hardening the R Runtime: From Single-Threaded Fragility to Production-Grade Infrastructure

23 min read
Creator, Parthenon
AI Development Assistant

The R runtime was the single most fragile component in the entire Parthenon stack. Every other service — PHP, Python AI, Solr, Redis, PostgreSQL — could handle concurrent requests gracefully. The R container could not. A single CohortMethod estimation on 1 million patients takes 5-30 minutes. During that time, the entire R process was locked — health checks timed out, status queries hung, and any other analysis request queued behind it with no feedback. This devlog covers the six-phase hardening effort that replaced the entire R runtime infrastructure in a single day.
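The core failure mode here is a blocking call on the same thread that answers health checks. A minimal Python sketch of the general pattern that avoids it: move the long-running analysis to a worker so status queries return immediately, and bound the wait with a timeout instead of hanging. All names here are illustrative, not the actual Darkstar API.

```python
import concurrent.futures
import time

# Illustrative stand-ins; the real runtime is R, not Python.
executor = concurrent.futures.ThreadPoolExecutor(max_workers=2)

def run_analysis(seconds):
    time.sleep(seconds)   # stands in for a 5-30 minute CohortMethod estimation
    return "done"

def health_check():
    return "ok"           # answered immediately, never queued behind analyses

future = executor.submit(run_analysis, 0.5)   # analysis runs in a worker
status = health_check()                       # responds while it is running
result = future.result(timeout=5)             # bounded wait, not an open-ended hang
```

The design point is the timeout: a stuck analysis fails fast with an error the caller can see, instead of silently wedging the whole process as bare `Rscript` under PID 1 did.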

Database Consolidation: Eliminating the Docker Data Loss Risk

4 min read
Creator, Parthenon
AI Development Assistant

After losing app data to an accidental Docker volume wipe and spending 24 hours restoring it, we hardened the database architecture to eliminate this class of failure entirely. The Docker PostgreSQL container is no longer the source of truth for anything — the host PostgreSQL instance owns all persistent data, and automated backups run every 6 hours.
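The post only states that the host PostgreSQL instance owns all persistent data and that backups run every 6 hours; the database name, output directory, and dump format below are assumptions. A hedged sketch of what such a backup job could look like, using stock `pg_dump`:

```python
import datetime
import pathlib
import subprocess

def backup_path(db, out_dir, now=None):
    """Timestamped target like parthenon-20240101-060000.sql.gz (names hypothetical)."""
    now = now or datetime.datetime.now()
    return pathlib.Path(out_dir) / f"{db}-{now:%Y%m%d-%H%M%S}.sql.gz"

def backup(db="parthenon", out_dir="/var/backups/postgres"):
    out = backup_path(db, out_dir)
    with open(out, "wb") as f:
        # pg_dump emits a logical dump of the host instance;
        # gzip keeps 6-hourly snapshots small
        dump = subprocess.Popen(["pg_dump", db], stdout=subprocess.PIPE)
        subprocess.run(["gzip"], stdin=dump.stdout, stdout=f, check=True)
        dump.wait()
    return out
```

Run from cron (or a systemd timer) on a `0 */6 * * *` schedule, this targets the host PostgreSQL directly, so an accidental `docker volume rm` can no longer take the source of truth with it.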