3.3.2. Advanced Prompting Techniques
💡 First Principle: Prompting technique selection should be driven by task complexity and the cognitive demand it places on the model. Zero-shot works when the task is clear and within the model's training distribution; few-shot adds examples to calibrate output format and style; chain-of-thought forces explicit reasoning steps for complex problems.
Technique selection guide:
| Technique | When to Use | Example Application | Token Cost |
|---|---|---|---|
| Zero-shot | Clear instructions, FM-familiar task | "Classify this email as positive/negative/neutral" | Low |
| Few-shot | Non-standard output format needed | "Given these 3 examples, extract entities in the same JSON format" | Medium |
| Chain-of-thought (CoT) | Multi-step reasoning, math, logic | "Think step by step: which investment option maximizes ROI?" | High |
| Self-consistency | High-stakes decisions | Run CoT 5x, take majority answer | Very High |
| Structured output | Downstream parsing required | Force JSON/XML response via output format in prompt + JSON Schema validation | Low |
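The self-consistency row above ("run CoT 5x, take majority answer") can be sketched in a few lines. This is a minimal illustration, not a production implementation: `invoke_fn` is a hypothetical callable standing in for your model-invocation code (e.g., a Bedrock call with temperature > 0 so the sampled reasoning paths differ), and the "answer" is assumed to be the model's final answer string.

```python
from collections import Counter

def self_consistency(prompt: str, invoke_fn, n: int = 5) -> str:
    """Run the same chain-of-thought prompt n times and return the
    majority final answer.

    invoke_fn is a hypothetical callable that sends the prompt to the FM
    and returns its final answer as a string. Sampling must be
    non-deterministic (temperature > 0) or all n runs will agree trivially.
    """
    answers = [invoke_fn(prompt).strip() for _ in range(n)]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```

Note the cost implication from the table: you pay for n full CoT generations per question, which is why self-consistency is reserved for high-stakes decisions.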
Structured output enforcement with JSON Schema:
```python
import json

import jsonschema

# JSON Schema for the expected response, defined once as a Python dict
# so it can be both embedded in the prompt and used for validation
schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence_score": {"type": "number", "minimum": 0, "maximum": 1},
        "key_topics": {"type": "array", "items": {"type": "string"}},
    },
    "required": ["sentiment", "confidence_score", "key_topics"],
}

# Force structured output by embedding the schema in the system prompt
system_prompt = f"""
You must respond ONLY with valid JSON matching this schema. No other text.
Schema:
{json.dumps(schema, indent=2)}
"""

# Post-process: validate the FM output against the schema
response_text = invoke_bedrock(prompt, system_prompt)
try:
    parsed = json.loads(response_text)
    jsonschema.validate(parsed, schema)
except (json.JSONDecodeError, jsonschema.ValidationError):
    # Retry once with stronger formatting instructions
    response_text = invoke_bedrock(
        prompt + "\n\nIMPORTANT: Return ONLY valid JSON.", system_prompt
    )
```
XML tags for Claude models: Anthropic's Claude models respond particularly well to XML-structured prompts:
```xml
<task>Analyze the following contract clause for risks</task>
<context>{{retrieved_documents}}</context>
<clause>{{user_provided_clause}}</clause>
<output_format>
Provide your analysis as:
<risk_level>low|medium|high</risk_level>
<issues>List each issue as a separate <issue> tag</issues>
<recommendation>Your recommended action</recommendation>
</output_format>
```
⚠️ Exam Trap: Adding more instructions to a prompt does not monotonically improve output quality. Beyond a certain density of instructions, models begin to drop or contradict earlier instructions. The solution is instruction hierarchy: put the most critical constraints in the system prompt, use structured format to enforce output schema, and validate programmatically rather than relying purely on prompt instructions.
Reflection Question: Your FM is supposed to extract structured data from invoices and return a JSON object, but 15% of responses include explanatory text before or after the JSON, breaking your downstream parser. What are two approaches to fix this — one at the prompt level and one at the application level?