Prompt and Policy Management as Production Assets
Overview
Prompts drift. Policies conflict. Production AI needs change control like any critical system.
Quick definition
Prompt policy management versions prompts as artifacts with semver, ties them to model IDs and temperature bounds, and requires review before promotion—like infrastructure config.
Definition
Prompt/policy management tracks versions, authors, environments (dev/stage/prod), and links changes to measured outcomes.
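The record shape implied by the definitions above can be sketched in TypeScript. The field names (`modelId`, `maxTemperature`, and so on) are illustrative, not the schema of any particular registry product:

```typescript
// Hypothetical shape for one versioned prompt record. Ties a prompt to a
// semver version, an author, an environment, a pinned model ID, and a
// temperature bound, per the definition above.
interface PromptRecord {
  id: string;                            // e.g. "support-triage"
  version: string;                       // semver, e.g. "1.4.0"
  author: string;
  environment: "dev" | "stage" | "prod";
  modelId: string;                       // pinned model identifier
  temperature: number;
  maxTemperature: number;                // policy bound the record must respect
  text: string;
}

// A record is eligible for promotion only if it stays inside its bounds.
function withinPolicy(rec: PromptRecord): boolean {
  return rec.temperature >= 0 && rec.temperature <= rec.maxTemperature;
}
```

Keeping the bound on the record itself means the check travels with the artifact through every environment.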
Why it matters
Ad-hoc prompt edits in production are undebuggable and unauditable.
Core framework
Step-by-step model as TypeScript interfaces (machine-readable checkpoints).
Git-like workflows
/**
* Git-like workflows
* PR reviews for prompt changes; CI evals on golden sets.
*/
export interface CoreFrameworkStep1GitLikeWorkflows {
  /** Order in the core framework (0-based) */
  readonly stepIndex: 0;
  /** Display title for this step */
  readonly title: "Git-like workflows";
  /** Narrative checkpoints as published in the guide */
  readonly narrative: readonly string[];
}

export const CoreFrameworkStep1GitLikeWorkflows_NARRATIVE: readonly string[] = [
  "PR reviews for prompt changes; CI evals on golden sets."
] as const;

Feature flags
/**
* Feature flags
* Gradual rollout of new templates by segment.
*/
export interface CoreFrameworkStep2FeatureFlags {
  /** Order in the core framework (0-based) */
  readonly stepIndex: 1;
  /** Display title for this step */
  readonly title: "Feature flags";
  /** Narrative checkpoints as published in the guide */
  readonly narrative: readonly string[];
}

export const CoreFrameworkStep2FeatureFlags_NARRATIVE: readonly string[] = [
  "Gradual rollout of new templates by segment."
] as const;

Detailed breakdown
Logic sections encoded as Python functions with structured narrative payloads.
Separation of concerns
def logic_block_1_separation_of_concerns(context: dict) -> dict:
    """Operational logic: Separation of concerns"""
    # Narrative steps from the guide (logic section)
    paragraphs = ["Business policy in config; linguistic style in prompt layers."]
    return {
        "heading": "Separation of concerns",
        "paragraphs": paragraphs,
        "context_keys": tuple(sorted(context.keys())),
    }

Technical patterns
Prompt registry
- `prompt_id@version` stored in git or a config service; the runtime resolves the active version.
- Eval harness scores each candidate on golden sets before prod.
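The registry pattern above can be sketched as a minimal in-memory class; the `publish`/`promote`/`resolveActive` method names are hypothetical, and a real deployment would back this with git or a config service as described:

```typescript
// Minimal in-memory sketch of a prompt registry keyed by "prompt_id@version",
// with a separate pointer to the active version per prompt.
class PromptRegistry {
  private versions = new Map<string, string>(); // "prompt_id@version" -> text
  private active = new Map<string, string>();   // prompt_id -> active version

  publish(id: string, version: string, text: string): void {
    this.versions.set(`${id}@${version}`, text);
  }

  // Promotion is a pointer flip, so rollback is equally cheap.
  promote(id: string, version: string): void {
    if (!this.versions.has(`${id}@${version}`)) {
      throw new Error(`unknown ${id}@${version}`);
    }
    this.active.set(id, version);
  }

  resolveActive(id: string): string | undefined {
    const version = this.active.get(id);
    return version ? this.versions.get(`${id}@${version}`) : undefined;
  }
}
```

Because every published version stays addressable, reverting means re-promoting an old version rather than editing live text.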
Code examples
Resolve active prompt
Lookup with safe default.
export async function getPrompt(name: string, fallback?: string): Promise<string> {
  const rec = await registry.getActive(name);
  if (!rec) {
    // Safe default: fall back to a known-good template rather than failing hard.
    if (fallback !== undefined) return fallback;
    throw new Error(`missing prompt ${name}`);
  }
  return rec.text;
}

System architecture
[Authoring + PR review]
→ [Registry: versioned prompts]
→ [Eval CI gate]
→ [Runtime resolver]
→ [Telemetry: prompt version in traces]

Real-world example
A support org reverted a harmful prompt in minutes using versioned templates—restoring CSAT.
Common mistakes
- Editing live prompts without tests.
- No ownership—everyone edits, no one accountable.
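The first mistake is exactly what the CI eval gate from the core framework prevents. A minimal sketch, assuming the caller supplies a `runPrompt` function (the candidate prompt applied to an input) and a golden set; both names are hypothetical:

```typescript
// One golden-set case: an input and the output the prompt must produce.
interface GoldenCase {
  input: string;
  expected: string;
}

// Gate promotion on the pass rate over the golden set: block any candidate
// prompt that scores below the threshold.
function evalGate(
  runPrompt: (input: string) => string,
  golden: GoldenCase[],
  threshold = 0.9,
): boolean {
  const passed = golden.filter((c) => runPrompt(c.input) === c.expected).length;
  return passed / golden.length >= threshold;
}
```

Real harnesses score with fuzzier metrics than exact match, but the gating logic is the same: no promotion without a measured pass rate.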
Related topics
PrimeAxiom implements governance for generative workflows—book a prompt ops review.