2.2.6. Prompt Libraries and Engineering Guidelines
Prompt engineering is not an ad-hoc skill — at enterprise scale, it requires systematic management. The exam explicitly tests "Provide guidelines for creating a prompt library" and "Provide prompt engineering guidelines for AI-powered business solutions." This subsection covers both.
Enterprise Prompt Library:
A prompt library is a governed repository of tested, versioned prompt templates that teams across the organization can reuse. Without a prompt library, every team writes prompts independently — leading to inconsistent quality, duplicated effort, and no shared learning.
Core Components:
| Component | Purpose |
|---|---|
| System prompts | Define agent persona, boundaries, and behavior |
| Task prompts | Templates for specific tasks (summarization, analysis, generation) |
| Few-shot examples | Reference examples that demonstrate desired output format |
| Guardrail prompts | Safety instructions that prevent harmful or off-topic responses |
| Evaluation prompts | Prompts used to test and benchmark agent quality |
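The components above can be captured as structured, reusable records rather than loose text files. The sketch below is a minimal illustration, not an official schema; all field names (`name`, `kind`, `version`, `template`) are assumptions chosen to mirror the table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    """One entry in a prompt library (illustrative field names)."""
    name: str             # e.g. "summarize-support-ticket"
    kind: str             # "system" | "task" | "few-shot" | "guardrail" | "eval"
    version: str          # e.g. "1.0.0"
    template: str         # body with {placeholders} for runtime values
    examples: tuple = ()  # optional few-shot (input, output) pairs

    def render(self, **values) -> str:
        """Fill the template's placeholders with concrete values."""
        return self.template.format(**values)

summarize = PromptTemplate(
    name="summarize-support-ticket",
    kind="task",
    version="1.0.0",
    template="Summarize the following support ticket in {max_words} words:\n{ticket}",
)

print(summarize.render(max_words=50, ticket="Printer offline since Monday."))
```

Storing prompts as versioned records like this is what makes the governance practices below (versioning, approval, usage tracking) possible in the first place.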
Governance Requirements:
- Versioning — Track prompt changes with version history and rollback capability
- Approval workflow — Changes to production prompts require review and testing
- Access control — Role-based access to prompt templates (viewer, editor, approver)
- Usage tracking — Monitor which prompts are used, by which agents, with what results
- Deprecation process — Retire outdated prompts without breaking dependent agents
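The versioning and approval requirements above can be sketched as a tiny in-memory registry. This is a conceptual illustration only; `PromptRegistry` and its methods are hypothetical names, and a real implementation would sit on a database with role-based access checks.

```python
class PromptRegistry:
    """Minimal versioned prompt store with approval gating (illustrative)."""

    def __init__(self):
        self._history = {}  # prompt name -> list of version records

    def propose(self, name, version, text):
        """Add a draft; drafts are not served until approved."""
        self._history.setdefault(name, []).append(
            {"version": version, "text": text, "approved": False})

    def approve(self, name, version):
        """Approval workflow: mark a reviewed version as production-ready."""
        for rec in self._history[name]:
            if rec["version"] == version:
                rec["approved"] = True

    def current(self, name):
        """Latest approved version, or None if nothing is approved yet."""
        approved = [r for r in self._history.get(name, []) if r["approved"]]
        return approved[-1] if approved else None

    def rollback(self, name):
        """Un-approve the latest version, reverting to the previous one."""
        approved = [r for r in self._history[name] if r["approved"]]
        if approved:
            approved[-1]["approved"] = False
```

Because every version is retained, a bad change can be rolled back without losing history, and dependent agents keep resolving `current(name)` throughout.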
Prompt Engineering Guidelines for Business Solutions:
| Technique | Description | When to Use |
|---|---|---|
| Clear instructions | Explicit, specific directions about what to do and how to format output | Always — the foundation of every prompt |
| Role assignment | "You are a financial analyst..." | When the agent needs domain expertise framing |
| Output formatting | "Respond in JSON format with fields: ..." | When downstream systems consume the output |
| Chain-of-thought | "Think through this step by step..." | Complex reasoning, multi-factor decisions |
| Few-shot examples | Provide 2-3 input/output examples | When output format or style needs to be consistent |
| Constraints | "Do not include personal opinions"; "Only use data from the provided documents" | Reducing hallucination, enforcing boundaries |
| Decomposition | Break complex tasks into sequential prompts | Tasks too complex for a single prompt |
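Several of the techniques in the table are usually combined in a single prompt. The sketch below composes role assignment, output formatting, constraints, few-shot examples, and a chain-of-thought nudge; the wording and the `build_prompt` helper are illustrative, not a prescribed format.

```python
# Role assignment, output formatting, and constraints in the system text.
SYSTEM = (
    "You are a financial analyst. "
    "Respond in JSON with fields: summary, risk_level. "
    "Only use data from the provided documents. "
    "Do not include personal opinions."
)

# Two few-shot examples to lock in the output format.
FEW_SHOT = [
    {"input": "Q3 revenue fell 4% on weaker EU sales.",
     "output": '{"summary": "Revenue down 4% in Q3", "risk_level": "medium"}'},
    {"input": "Cash reserves doubled after the asset sale.",
     "output": '{"summary": "Cash reserves doubled", "risk_level": "low"}'},
]

def build_prompt(document: str) -> str:
    """Assemble system text, few-shot examples, and the task into one prompt."""
    shots = "\n".join(f"Input: {s['input']}\nOutput: {s['output']}"
                      for s in FEW_SHOT)
    # Chain-of-thought instruction, then the actual input to analyze.
    return (f"{SYSTEM}\n\n{shots}\n\n"
            f"Think through this step by step, then give only the JSON.\n"
            f"Input: {document}\nOutput:")
```

For decomposition, the same pattern applies per step: each sub-task gets its own focused prompt, and the output of one becomes the input of the next.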
Anti-patterns to Avoid:
- Vague instructions ("analyze this data") — Be specific about what to analyze and what format to use
- Missing context — Not providing enough background for the model to reason accurately
- Over-prompting — Stuffing so many instructions that the model loses focus on the primary task
- No output constraints — Allowing the model to generate unbounded responses
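A quick before/after makes the first two anti-patterns concrete. Both prompt strings below are invented examples, shown only to contrast a vague instruction with a specific, constrained one.

```python
# Anti-pattern: vague instruction, no context, no output constraints.
vague = "Analyze this data."

# Rewrite: specific task, explicit output format, bounded scope.
specific = (
    "Identify the three largest month-over-month changes in the sales table "
    "below. Return them as a markdown table with columns: month, product, "
    "pct_change. Use only the rows provided; if fewer than three changes "
    "exist, say so."
)
```

The rewrite tells the model what to analyze, how to format the answer, and where its data boundary is, addressing all four anti-patterns at once.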
Exam Trap: The exam may present a prompt that produces inconsistent results and ask how to improve it. Look for missing constraints, unclear instructions, or lack of output formatting requirements. Adding few-shot examples is often the most effective fix for format consistency issues.
Reflection Question: A company's customer service agents use different system prompts across different Copilot Studio agents, leading to inconsistent tone and accuracy. What prompt library governance practice would you implement first?