Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.
3.6. Enable Tracing and Collect Feedback
💡 First Principle: Tracing and feedback answer different questions: tracing tells you what happened inside your app (which steps ran, with what latency, and which errors); feedback tells you whether users liked the result. You need both because a request can succeed technically (tracing shows no errors) but still fail for the user (feedback shows a thumbs-down). The exam tests whether you know that Azure AI Foundry provides both: tracing through Application Insights integration, and explicit feedback through collection you wire up yourself.
🔧 Implementation Reference: Tracing
| Item | Value |
|---|---|
| SDK | azure-ai-inference with OpenTelemetry |
| Tracing Backend | Azure Monitor / Application Insights |
| Key Metrics | Latency, token usage, error rates |
Enable Tracing:
from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Configure tracing to export telemetry to Azure Monitor / Application Insights
configure_azure_monitor(connection_string="InstrumentationKey=...")

# Get a tracer for this module
tracer = trace.get_tracer(__name__)

# Wrap the model call in a span so its latency and any errors are recorded
with tracer.start_as_current_span("chat_completion"):
    response = client.chat.completions.create(...)
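To see what a span actually captures without wiring up Azure Monitor, the stdlib-only sketch below mimics the essentials a tracing backend stores for each span (name, duration, error status). The `demo_span` context manager is a teaching stand-in, not part of OpenTelemetry or the Azure SDK:

```python
import time
from contextlib import contextmanager

@contextmanager
def demo_span(name, collected):
    # Teaching stand-in for tracer.start_as_current_span(): records the
    # same essentials a backend like Application Insights would store.
    start = time.perf_counter()
    record = {"name": name, "status": "OK", "error": None}
    try:
        yield record
    except Exception as exc:
        record["status"] = "ERROR"
        record["error"] = repr(exc)
        raise
    finally:
        record["duration_ms"] = (time.perf_counter() - start) * 1000
        collected.append(record)

spans = []
with demo_span("chat_completion", spans):
    pass  # the model call would go here

try:
    with demo_span("failing_call", spans):
        raise RuntimeError("rate limited")
except RuntimeError:
    pass

print([(s["name"], s["status"]) for s in spans])
# → [('chat_completion', 'OK'), ('failing_call', 'ERROR')]
```

Note that the span is finalized in `finally`, so latency and error status are recorded even when the wrapped call raises; real tracers behave the same way.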
Collect Feedback:
# Store feedback keyed by conversation ID so it can be joined with traces
feedback_data = {
    "conversation_id": conversation_id,
    "rating": user_rating,  # 1-5 scale or thumbs up/down
    "comment": user_comment,
    "response_id": response.id,
}
# Log to Application Insights or custom store
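The feedback record above has to land somewhere queryable. As one illustrative option for a "custom store" (not an Azure-specific API), the sketch below persists feedback to a local SQLite database and computes an average rating per conversation; the table name and helper functions are assumptions for this example:

```python
import sqlite3

def init_store(conn: sqlite3.Connection) -> None:
    # Hypothetical local store; in production you might instead log a
    # custom event to Application Insights and query it with KQL.
    conn.execute(
        """CREATE TABLE IF NOT EXISTS feedback (
               conversation_id TEXT,
               response_id TEXT,
               rating INTEGER,
               comment TEXT
           )"""
    )

def record_feedback(conn, conversation_id, response_id, rating, comment=""):
    conn.execute(
        "INSERT INTO feedback VALUES (?, ?, ?, ?)",
        (conversation_id, response_id, rating, comment),
    )

def average_rating(conn, conversation_id):
    row = conn.execute(
        "SELECT AVG(rating) FROM feedback WHERE conversation_id = ?",
        (conversation_id,),
    ).fetchone()
    return row[0]

conn = sqlite3.connect(":memory:")
init_store(conn)
record_feedback(conn, "conv-1", "resp-1", 5, "helpful")
record_feedback(conn, "conv-1", "resp-2", 3)
print(average_rating(conn, "conv-1"))  # → 4.0
```

The key design point carries over to any backend: storing `conversation_id` and `response_id` alongside the rating is what lets you join a thumbs-down back to the exact trace that produced it.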
Model Reflection enables the model to evaluate its own responses:
reflection_prompt = """
Review your previous response for:
1. Accuracy - Are all facts correct?
2. Completeness - Did you address all parts of the question?
3. Tone - Is it appropriate for the context?
Previous response: {response}
Provide a confidence score (0-100) and any corrections needed.
"""
Written by Alvin Varughese
Founder • 15 professional certifications