Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

3.3.2. Advanced Prompting Techniques

💡 First Principle: Prompting technique selection should be driven by task complexity and the cognitive demand it places on the model. Zero-shot works when the task is clear and within the model's training distribution; few-shot adds examples to calibrate output format and style; chain-of-thought forces explicit reasoning steps for complex problems.

Technique selection guide:
| Technique | When to Use | Example Application | Token Cost |
|---|---|---|---|
| Zero-shot | Clear instructions, FM-familiar task | "Classify this email as positive/negative/neutral" | Low |
| Few-shot | Non-standard output format needed | "Given these 3 examples, extract entities in the same JSON format" | Medium |
| Chain-of-thought (CoT) | Multi-step reasoning, math, logic | "Think step by step: which investment option maximizes ROI?" | High |
| Self-consistency | High-stakes decisions | Run CoT 5x, take the majority answer | Very high |
| Structured output | Downstream parsing required | Force a JSON/XML response via an output format in the prompt, plus JSON Schema validation | Low |
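The self-consistency row above can be sketched as a majority vote over several chain-of-thought samples. This is a minimal sketch: the model-invocation function is passed in as a parameter (in practice it would call your FM at a temperature above 0 so the samples differ), and the stand-in model below is purely illustrative:

```python
from collections import Counter

def self_consistency(invoke_fn, prompt, n_samples=5):
    """Sample the same chain-of-thought prompt n times and return
    the most common final answer (majority vote)."""
    answers = [invoke_fn(prompt) for _ in range(n_samples)]
    # The most frequent answer wins; ties resolve to the first seen
    return Counter(answers).most_common(1)[0][0]

# Illustrative stand-in for a real model call, returning varied answers:
fake_outputs = iter(["B", "A", "B", "B", "A"])
result = self_consistency(lambda p: next(fake_outputs),
                          "Which investment option maximizes ROI?")
# result is "B" (3 of 5 samples agree)
```

Note the cost trade-off from the table: five CoT runs cost roughly five times the tokens of one, which is why this technique is reserved for high-stakes decisions.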
Structured output enforcement with JSON Schema:

# Define the schema once as a Python dict so it can be embedded in the
# system prompt AND reused for programmatic validation.
import json
import jsonschema

schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence_score": {"type": "number", "minimum": 0, "maximum": 1},
        "key_topics": {"type": "array", "items": {"type": "string"}}
    },
    "required": ["sentiment", "confidence_score", "key_topics"]
}

system_prompt = f"""
You must respond ONLY with valid JSON matching this schema. No other text.

Schema:
{json.dumps(schema, indent=2)}
"""

# Post-process: validate the FM output against the schema
response_text = invoke_bedrock(prompt, system_prompt)
try:
    parsed = json.loads(response_text)
    jsonschema.validate(parsed, schema)
except (json.JSONDecodeError, jsonschema.ValidationError):
    # Retry once with stronger formatting instructions, then re-validate
    response_text = invoke_bedrock(prompt + "\n\nIMPORTANT: Return ONLY valid JSON.", system_prompt)
    parsed = json.loads(response_text)
    jsonschema.validate(parsed, schema)

XML tags for Claude models: Anthropic's Claude models respond particularly well to XML-structured prompts:

<task>Analyze the following contract clause for risks</task>
<context>{{retrieved_documents}}</context>
<clause>{{user_provided_clause}}</clause>
<output_format>
Provide your analysis as:
<risk_level>low|medium|high</risk_level>
<issues>List each issue as a separate <issue> tag</issues>
<recommendation>Your recommended action</recommendation>
</output_format>
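A response in the requested tag format can be pulled apart with a small regex helper. This is a sketch, not an official SDK utility: the tag names mirror the prompt above, and the sample response is illustrative:

```python
import re

def extract_tag(text, tag):
    """Return the content of the first <tag>...</tag> pair, or None."""
    match = re.search(rf"<{tag}>(.*?)</{tag}>", text, re.DOTALL)
    return match.group(1).strip() if match else None

# Illustrative model response in the format requested by the prompt:
sample_response = """
<risk_level>high</risk_level>
<issues><issue>Unlimited liability</issue><issue>No termination clause</issue></issues>
<recommendation>Renegotiate the liability cap before signing</recommendation>
"""

risk = extract_tag(sample_response, "risk_level")            # "high"
issues = re.findall(r"<issue>(.*?)</issue>", sample_response)
# issues -> ["Unlimited liability", "No termination clause"]
```

Because the output tags are named in the prompt itself, parsing stays deterministic even though the free text inside each tag varies between invocations.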

⚠️ Exam Trap: Adding more instructions to a prompt does not monotonically improve output quality. Beyond a certain density of instructions, models begin to drop or contradict earlier instructions. The solution is instruction hierarchy: put the most critical constraints in the system prompt, use structured format to enforce output schema, and validate programmatically rather than relying purely on prompt instructions.

Reflection Question: Your FM is supposed to extract structured data from invoices and return a JSON object, but 15% of responses include explanatory text before or after the JSON, breaking your downstream parser. What are two approaches to fix this — one at the prompt level and one at the application level?
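For the application-level side of that question, one common approach is to tolerate surrounding prose by extracting the first balanced JSON object before parsing. The sketch below uses naive brace counting, which assumes the payload contains no unescaped braces inside string values; the noisy response is illustrative:

```python
import json

def extract_json(text):
    """Find and parse the first balanced {...} object in a response
    that may include explanatory text before or after the JSON."""
    start = text.find("{")
    if start == -1:
        raise ValueError("no JSON object found")
    depth = 0
    for i, ch in enumerate(text[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:          # closing brace of the outermost object
                return json.loads(text[start:i + 1])
    raise ValueError("unbalanced JSON object")

noisy = 'Sure! Here is the data:\n{"invoice_id": "INV-42", "total": 99.5}\nLet me know!'
data = extract_json(noisy)
# data -> {"invoice_id": "INV-42", "total": 99.5}
```

The prompt-level counterpart is the instruction-hierarchy advice above: state "respond ONLY with valid JSON, no other text" in the system prompt, then keep this parser as a safety net rather than the primary mechanism.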

Written by Alvin Varughese, Founder (15 professional certifications)