3.3. Prompt Engineering and Governance
💡 First Principle: Prompts are code — they determine model behavior, encode business rules, and carry security implications. Treating them as ad-hoc strings rather than governed artifacts leads to inconsistent outputs, security vulnerabilities, and unauditable AI behavior. Professional GenAI systems require prompt versioning, access control, and change management.
The stakes of ungoverned prompts: a system prompt that gets updated without review can change the FM's behavior across all users of a production application instantaneously. A prompt with insufficient safety instructions becomes an attack surface for prompt injection. A prompt without output format specification produces inconsistently parseable responses that break downstream systems.
⚠️ Think of prompts the same way you'd think of a function signature in code: the system prompt is the function definition (role, constraints, output format), user messages are the arguments, and the FM response is the return value. Like code, prompts need version control, testing, and access control.
| Governance Concern | Ungoverned Approach | Governed Approach |
|---|---|---|
| Versioning | Prompts edited ad-hoc in code | Bedrock Prompt Management with named versions |
| Change control | Anyone can modify | PR review + approval before production update |
| Auditability | No history of what prompt was active when | Model Invocation Logs capture prompt + response |
| Access | Prompts in plaintext environment variables | Bedrock Prompt resource with IAM permissions |
Common Misconception: System prompts are internal configuration that don't need security controls. System prompts are a primary attack surface — indirect prompt injection via retrieved documents can override them; they can be extracted by sufficiently crafted user inputs; and they encode business logic that may be confidential. Treat them with the same rigor as application code.