Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

2.4. Implement Content Safety

Building on the Responsible AI principles from Section 1.5, Content Safety provides runtime protection against harmful content. This connects directly to the fairness and reliability principles—ensuring your AI behaves safely for all users.

💡 First Principle: Content Safety is a tunable filter, not a binary switch. Think of severity levels like a volume knob: 0 is "definitely safe" and 6 is "definitely harmful." Your threshold determines where you draw the line—set it too low and you block legitimate content; too high and harmful content gets through. The exam tests whether you understand that adjusting thresholds (not removing categories) is how you fix over-blocking or under-blocking.
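The "volume knob" idea can be sketched in a few lines of plain Python. This is an illustrative helper, not part of the Content Safety SDK; the function name and threshold values are assumptions for the example:

```python
def is_blocked(severities: dict[str, int], threshold: int = 2) -> bool:
    """Block content if any category's severity meets or exceeds the threshold.

    Lowering the threshold blocks more (risking false positives);
    raising it blocks less (risking harmful content slipping through).
    """
    return any(sev >= threshold for sev in severities.values())

# Over-blocking fix: raise the threshold, don't drop the category.
result = {"Hate": 0, "Violence": 2, "Sexual": 0, "SelfHarm": 0}
print(is_blocked(result, threshold=2))  # True  - blocked at a strict threshold
print(is_blocked(result, threshold=4))  # False - allowed after raising it
```

Note that the same content passes or fails depending only on the threshold, which is exactly the lever the exam expects you to reach for.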

🔧 Implementation Reference: Content Safety
| Item | Value |
| --- | --- |
| Package | azure-ai-contentsafety |
| Class | ContentSafetyClient |
| Methods | analyze_text(), analyze_image() |
| Header | Ocp-Apim-Subscription-Key |
| Endpoint | POST /contentsafety/text:analyze |

The four content categories and their severity ranges are shown below. Severity 0 is safe; severity 6 is severe.

Categories and Severity:
| Category | Description | Severity Range |
| --- | --- | --- |
| Hate | Discrimination, slurs | 0 (safe) – 6 (severe) |
| Violence | Physical harm | 0 – 6 |
| Sexual | Sexual content | 0 – 6 |
| SelfHarm | Self-injury content | 0 – 6 |
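Thresholds can also be set per category, since tolerance for, say, Violence may differ from SelfHarm in a given application. A minimal sketch of that decision logic; the function and the threshold map are illustrative assumptions, not SDK features:

```python
# Illustrative per-category thresholds (severity at or above this value blocks).
DEFAULT_THRESHOLDS = {"Hate": 2, "Violence": 2, "Sexual": 2, "SelfHarm": 2}

def flag_categories(severities: dict[str, int],
                    thresholds: dict[str, int] = DEFAULT_THRESHOLDS) -> list[str]:
    """Return the categories whose severity meets or exceeds their threshold."""
    return [cat for cat, sev in severities.items()
            if sev >= thresholds.get(cat, 2)]

flag_categories({"Hate": 4, "Violence": 0, "Sexual": 0, "SelfHarm": 2})
# -> ["Hate", "SelfHarm"]
```

Fixing over-blocking for one category means raising that category's threshold in the map, leaving the others untouched.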
Request Body:
{
    "text": "Content to analyze",
    "categories": ["Hate", "Violence", "Sexual", "SelfHarm"],
    "outputType": "FourSeverityLevels"
}
Error Handling Pattern:
import logging
import time

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.exceptions import HttpResponseError

try:
    result = client.analyze_text(AnalyzeTextOptions(text=user_input))
except HttpResponseError as e:
    if e.status_code == 400:
        # Invalid request - check text length (max 10K characters)
        logging.error("Invalid request: text may exceed 10K character limit")
    elif e.status_code == 429:
        # Rate limited - implement exponential backoff
        time.sleep(int(e.response.headers.get("Retry-After", 60)))
    else:
        logging.error(f"Content Safety error: {e.status_code}")
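The 429 branch above sleeps once and gives up; production code typically retries with exponential backoff and jitter. A minimal sketch of the delay schedule (the base and cap values are illustrative choices, not service requirements):

```python
import random

def backoff_delays(retries: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff with full jitter: each delay is drawn uniformly
    from [0, min(cap, base * 2**attempt)]."""
    return [random.uniform(0, min(cap, base * 2 ** attempt))
            for attempt in range(retries)]

# Delay ceilings double each attempt: 1s, 2s, 4s, 8s, ... capped at 60s.
for attempt, delay in enumerate(backoff_delays(4)):
    print(f"retry {attempt}: ceiling {min(60.0, 2 ** attempt):.0f}s, chose {delay:.2f}s")
```

When the service returns a Retry-After header, prefer that value over the computed delay; the schedule above is the fallback.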
CLI Equivalent (REST):
curl -X POST "https://{endpoint}/contentsafety/text:analyze?api-version=2024-02-15-preview" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Content to analyze",
    "categories": ["Hate", "Violence", "Sexual", "SelfHarm"],
    "outputType": "FourSeverityLevels"
  }'
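A successful call returns a severity score per requested category. The response below is illustrative (the severity values are made up); with FourSeverityLevels output, severities come back as 0, 2, 4, or 6:

```json
{
  "categoriesAnalysis": [
    { "category": "Hate", "severity": 0 },
    { "category": "Violence", "severity": 2 },
    { "category": "Sexual", "severity": 0 },
    { "category": "SelfHarm", "severity": 0 }
  ]
}
```

Your application compares each severity to its threshold to decide whether to allow, flag, or block the content.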

⚠️ Exam Trap: If legitimate content is incorrectly blocked, adjust severity thresholds, not categories.

[Diagram: how to respond to Content Safety issues]

Further reading: Content Safety Documentation

Written by Alvin Varughese, Founder, 15 professional certifications