Compiled AI: Deterministic Code Generation for LLM-Based Workflow Automation

Geert Trooskens (XY.AI Labs, Palo Alto, CA), Aaron Karlsberg (XY.AI Labs, Palo Alto, CA), Anmol Sharma (XY.AI Labs, Palo Alto, CA), Lamara De Brouwer (XY.AI Labs, Palo Alto, CA), Max Van Puyvelde (Stanford University School of Medicine, Stanford, CA), Matthew Young (XY.AI Labs, Palo Alto, CA), John Thickstun (Cornell University, Ithaca, NY), Gil Alterovitz (Brigham and Women's Hospital / Harvard Medical School, Boston, MA), Walter A. De Brouwer (Stanford University School of Medicine, Stanford, CA)
Published: April 6, 2026
Updated: April 6, 2026

Abstract

We study compiled AI, a paradigm in which large language models generate executable code artifacts during a compilation phase, after which workflows execute deterministically without further model invocation. This paradigm has antecedents in prior work on declarative pipeline optimization (DSPy) and hybrid neural-symbolic planning (LLM+P); our contribution is a systems-oriented study of its application to high-stakes enterprise workflows, with particular emphasis on healthcare settings where reliability and auditability are critical. By constraining generation to narrow business-logic functions embedded in validated templates, compiled AI trades runtime flexibility for predictability, auditability, cost efficiency, and reduced security exposure. We introduce (i) a system architecture for constrained LLM-based code generation, (ii) a four-stage generation-and-validation pipeline that converts probabilistic model output into production-ready code artifacts, and (iii) an evaluation framework measuring operational metrics including token amortization, determinism, reliability, security, and cost. We evaluate on two task types: function-calling (BFCL, n=400) and document intelligence (DocILE, n=5,680 invoices). On function-calling, compiled AI achieves 96% task completion with zero execution tokens, breaking even with runtime inference at approximately 17 transactions and reducing token consumption by 57x at 1,000 transactions. On document intelligence, our Code Factory variant matches Direct LLM on key field extraction (KILE: 80.0%) while achieving the highest line item recognition accuracy (LIR: 80.4%). Security evaluation across 135 test cases demonstrates 96.7% accuracy on prompt injection detection and 87.5% on static code safety analysis with zero false positives.
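The token-amortization claim in the abstract can be checked arithmetically: a one-time compilation cost is recovered once cumulative per-transaction inference tokens exceed it. The sketch below uses illustrative token counts (the per-transaction runtime cost and compilation cost are hypothetical assumptions, not figures from the paper) to show that a break-even near 17 transactions and a 57x reduction at 1,000 transactions are mutually consistent.

```python
# Sketch of the token-amortization break-even described in the abstract.
# The break-even (~17 tx) and 57x reduction at 1,000 tx are reported in the
# paper; the concrete token counts below are illustrative assumptions only.

def breakeven_transactions(compile_tokens: float, runtime_tokens_per_tx: float) -> float:
    """Transaction count at which the one-time compilation cost equals
    the cumulative cost of per-transaction runtime inference."""
    return compile_tokens / runtime_tokens_per_tx

def token_reduction_factor(n_tx: int, compile_tokens: float, runtime_tokens_per_tx: float) -> float:
    """Ratio of runtime-inference tokens to compiled-AI tokens after n_tx
    transactions (compiled AI spends zero execution tokens after compile)."""
    return (n_tx * runtime_tokens_per_tx) / compile_tokens

# Hypothetical values: 2,000 tokens per runtime-inference transaction,
# compilation costing 17.5x that (i.e., 35,000 tokens).
runtime_per_tx = 2_000.0
compile_cost = 17.5 * runtime_per_tx

print(breakeven_transactions(compile_cost, runtime_per_tx))        # 17.5
print(token_reduction_factor(1_000, compile_cost, runtime_per_tx))  # ~57.1
```

Note the consistency: a 57x reduction at 1,000 transactions implies the compile cost is roughly 1000/57 ≈ 17.5 transactions' worth of runtime tokens, matching the reported break-even of approximately 17.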

Metadata

Comment: 14 pages, 2 figures, 3 tables
