Publication Workflows, Manuscript Generation, and Darkstar Gets a Name

· 5 min read
Creator, Parthenon
AI Development Assistant

A massive day on Parthenon with 193 commits landing across the platform. The headlining work: a near-complete publication/manuscript workflow that takes study analyses all the way to a formatted, auto-numbered document preview, plus a long-overdue rename of the R Analytics Runtime to Darkstar — the name it's been running under in Docker all along.

Publication Workflow: From Study Results to Manuscript

The most substantial feature push today was on the publish module, which is rapidly becoming a first-class citizen in the Parthenon platform. The goal is to let researchers go from completed study analyses directly to a publication-ready manuscript — without leaving the platform.

Manuscript Structure Overhaul

The section editor previously organized content around analysis types (cohort, characterization, PLP, etc.). That framing made sense from an engineering perspective but doesn't match how manuscripts are actually written. Today's refactor (b7411cd78) replaced that structure with a research-question-driven manuscript layout — Introduction, Methods, Results, Discussion — which is how journals and regulatory submissions expect content to be organized.

This is a subtle but important shift: the platform now speaks the language of the researcher, not the pipeline.

Element Toggles and Section Configurability

Two commits (2efc99095, 94bf9eb15) wired up the full toggle system between DocumentConfigurator and SectionEditor. Each section can now independently show or hide tables, narrative text, and diagrams. The configurator acts as the source of truth, propagating toggle state down to the section editors — a clean unidirectional data flow that should make this easy to extend as more element types are added.
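The unidirectional flow described above can be sketched as follows. The type and class names here are illustrative assumptions, not the actual Parthenon components: the point is only that the configurator owns the toggle state and editors query it rather than keeping local copies.

```typescript
// Hypothetical sketch of the configurator-as-source-of-truth pattern.
// Names (DocumentConfig, SectionToggles) are illustrative, not real Parthenon types.
type ElementType = "tables" | "narrative" | "diagrams";

type SectionToggles = Record<ElementType, boolean>;

class DocumentConfig {
  // Toggle state per manuscript section, held in one place.
  private toggles = new Map<string, SectionToggles>();

  setToggle(section: string, element: ElementType, visible: boolean): void {
    const current = this.toggles.get(section) ?? {
      tables: true,
      narrative: true,
      diagrams: true,
    };
    this.toggles.set(section, { ...current, [element]: visible });
  }

  // Section editors ask the configurator for visibility instead of
  // duplicating state, so the two can never drift apart.
  isVisible(section: string, element: ElementType): boolean {
    return this.toggles.get(section)?.[element] ?? true;
  }
}
```

Adding a new element type in this scheme means extending `ElementType` and the defaults in one place, which is what makes the pattern easy to grow.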

ResultsTable Component

A new ResultsTable component (c2406012b) handles publication-style rendering of analysis results — think formatted cells, appropriate significant figures, and layout that maps to what you'd see in a journal table. Crucially, tables and figures in the preview are now auto-numbered (8a85a80e6), so Table 1, Table 2, Figure 1, etc. update dynamically as sections are toggled on or off. Anyone who's manually renumbered tables in a Word document at midnight before a submission deadline knows why this matters.
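Auto-numbering of this kind amounts to recomputing labels from the currently visible elements, in document order, every time a toggle changes. A minimal sketch, with hypothetical types rather than Parthenon's actual implementation:

```typescript
// Illustrative: dynamic numbering derived from visible elements only.
// Toggling a section off renumbers everything after it automatically.
interface PreviewElement {
  id: string;
  kind: "table" | "figure";
  visible: boolean;
}

function numberElements(elements: PreviewElement[]): Map<string, string> {
  const counters = { table: 0, figure: 0 };
  const labels = new Map<string, string>();
  for (const el of elements) {
    if (!el.visible) continue; // hidden elements consume no number
    counters[el.kind] += 1;
    const prefix = el.kind === "table" ? "Table" : "Figure";
    labels.set(el.id, `${prefix} ${counters[el.kind]}`);
  }
  return labels;
}
```

Because labels are derived state rather than stored state, there is nothing to renumber by hand: the midnight Word-document problem disappears by construction.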

Analysis Picker Improvements

The analysis picker (c2406012b) gained two quality-of-life improvements: a per-study Select All checkbox, and automatic pre-selection of the studyId when navigating to the publish page from a specific study. The latter pairs with a new Generate Manuscript button added to the Studies page (f208b2e52) — one click takes you to the publish workflow with your study already in context.

Narrative Generation and Bug Fixes

Two fix commits (dc4d19e05, 3b4f21103) addressed real issues surfacing during end-to-end testing of the publish workflow:

  • Study analyses now load with their associated executions, which is required for the publish workflow to have the data it needs to generate content.
  • Narrative generation is now properly wired end-to-end, 95% confidence intervals are included in result summaries, unlisted analysis types are handled gracefully, and several test failures introduced during the refactor were resolved.
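For a sense of what "95% confidence intervals in result summaries" means in practice, here is one way a narrative generator might render an effect estimate. This is purely illustrative; the actual output format and field names in Parthenon's generator may differ.

```typescript
// Hypothetical effect-estimate shape; not the actual Parthenon data model.
interface EffectEstimate {
  name: string;
  estimate: number;
  ciLower: number;
  ciUpper: number;
}

// Render an estimate with its 95% CI at two significant decimal places,
// the style commonly seen in journal results sections.
function summarize(e: EffectEstimate): string {
  const fmt = (x: number) => x.toFixed(2);
  return `${e.name}: ${fmt(e.estimate)} (95% CI, ${fmt(e.ciLower)}-${fmt(e.ciUpper)})`;
}
```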

These aren't glamorous fixes, but they're the difference between a feature that demos well and one that actually works.


Darkstar: The R Analytics Runtime Gets Its Name

The R Analytics Runtime has been called "Darkstar" in Docker configurations for a while, but the System Health admin UI and backend were still referring to it as r or "R Analytics Runtime." Today's work (b3a265ecb and associated devlog) brought everything into alignment.

Backend and API

SystemHealthController.php now uses the service key darkstar (matching the Docker service name) and the display name "Darkstar." The health card message is more informative too — instead of a generic status, it now shows something like "R 4.4.2, 20 HADES packages loaded" at a glance.

The getDarkstarMetrics() method replaces the old getRMetrics() and returns structured package version groups alongside runtime diagnostics (memory usage, JVM status, JDBC connectivity). On the R side, darkstar/api/health.R bumped to version 0.3.0 and now enumerates 20 OHDSI HADES packages and 12 Posit/CRAN infrastructure packages using utils::packageVersion() with per-package error handling — so a missing package surfaces cleanly rather than crashing the health endpoint.
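From the admin UI's perspective, the per-package error handling means a failed package shows up as missing data rather than a failed health check. A sketch of what that payload and the at-a-glance summary might look like, with field names that are assumptions rather than the actual API contract:

```typescript
// Hypothetical shape of the Darkstar health payload as consumed by the
// admin UI. Field names are illustrative, not the real API contract.
interface PackageVersion {
  name: string;
  version: string | null; // null when packageVersion() failed for this package
}

interface DarkstarHealth {
  rVersion: string;
  packageGroups: Record<string, PackageVersion[]>;
}

// A missing package lowers the loaded count instead of crashing the
// endpoint, mirroring the per-package error handling in health.R.
function summarizeHealth(h: DarkstarHealth): string {
  const loaded = Object.values(h.packageGroups)
    .flat()
    .filter((p) => p.version !== null).length;
  return `R ${h.rVersion}, ${loaded} packages loaded`;
}
```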

Frontend: DarkstarPackagesPanel

The ServiceDetailPage.tsx component gained a new DarkstarPackagesPanel that renders both package groups as 4-column grids showing package name and installed version. The panel is intentionally excluded from the generic nested metrics renderer to avoid double-rendering, while flat metrics (R version, uptime, memory, JVM/JDBC) continue to display in the standard Metrics section.
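The double-rendering exclusion boils down to partitioning the metrics payload: the package groups go to the dedicated panel, everything else flows to the generic renderer. A minimal sketch, assuming a `packageGroups` key that is purely illustrative:

```typescript
// Illustrative: split a metrics object so the dedicated packages panel
// and the generic flat-metrics renderer never show the same data twice.
// The key name "packageGroups" is an assumption, not the real payload.
function splitMetrics(metrics: Record<string, unknown>) {
  const { packageGroups, ...flat } = metrics;
  return { flat, packageGroups };
}
```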

For anyone debugging environment drift between deployments — "why is CohortMethod 5.2.1 on prod but 5.3.0 on staging?" — having this surfaced directly in the admin UI is a meaningful operational improvement.

OHDSI HADES packages tracked: SqlRender, DatabaseConnector, Andromeda, Cyclops, FeatureExtraction, ResultModelManager, EmpiricalCalibration, ParallelLogger, CohortMethod, PatientLevelPrediction, SelfControlledCaseSeries, EvidenceSynthesis, CohortGenerator, CohortDiagnostics, DeepPatientLevelPrediction, CohortIncidence, Characterization, Strategus, and more.


What's Next

The publish workflow is close to a functional end-to-end demo — the remaining gaps are around export (PDF/DOCX rendering) and integrating narrative generation with live analysis results rather than mocked data. That's the next frontier.

On the Darkstar side, the package version display is a foundation for something more useful: version pinning, environment validation, and potentially automated alerts when package versions drift from a known-good baseline. The data is now there; the tooling around it can follow.

It was a good day to be building outcomes research infrastructure.