6.1.2. Translation Services
Translator converts text between languages. It supports document translation and custom terminology.
💡 First Principle: Unstructured text hides structured information—customer reviews contain sentiment, contracts contain entity names, articles contain key topics. Text Analysis extracts this structure, but different extractions answer different questions: "How do customers feel?" → sentiment. "What people/places/orgs are mentioned?" → NER. "What's this about?" → key phrases. The exam tests whether you can match the business question to the correct analysis type.
🔧 Implementation Reference: Azure AI Language
| Item | Value |
|---|---|
| Package | azure-ai-textanalytics |
| Class | TextAnalyticsClient |
| Header | Ocp-Apim-Subscription-Key |
| Endpoint | POST /language/:analyze-text |
The following methods are the most commonly tested. Each takes text input and returns structured output.
Key Methods:
| Method | Output |
|---|---|
| detect_language() | Language code + confidence |
| analyze_sentiment() | Positive/Negative/Neutral/Mixed + scores |
| extract_key_phrases() | Important phrases |
| recognize_entities() | Named entities (Person, Org, Location) |
| recognize_pii_entities() | PII + redacted text |
Testable Pattern:
```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
result = client.analyze_sentiment(
    ["The hotel was great but the food was terrible."],
    show_opinion_mining=True,
)[0]
print(result.sentiment)  # "mixed"
```
CLI Equivalent (REST):
```shell
curl -X POST "https://{endpoint}/language/:analyze-text?api-version=2023-04-01" \
  -H "Ocp-Apim-Subscription-Key: {key}" \
  -H "Content-Type: application/json" \
  -d '{"kind": "SentimentAnalysis", "analysisInput": {"documents": [{"id": "1", "text": "The hotel was great"}]}}'
```
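The same request can be issued from any HTTP client. The sketch below just builds the JSON payload the `:analyze-text` endpoint expects for `kind: SentimentAnalysis`; the helper name is mine, and the field names come from the curl example above:

```python
import json

def build_sentiment_request(doc_id: str, text: str) -> dict:
    """Build the analyze-text payload for kind=SentimentAnalysis."""
    return {
        "kind": "SentimentAnalysis",
        "analysisInput": {
            "documents": [{"id": doc_id, "text": text}]
        },
    }

payload = build_sentiment_request("1", "The hotel was great")
print(json.dumps(payload, indent=2))
```

Swapping `kind` to `EntityRecognition`, `KeyPhraseExtraction`, etc. reuses the same envelope, which is why the REST surface is a single endpoint.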
Azure AI Language Documentation
Azure Translator:
| Item | Value |
|---|---|
| Endpoint | https://api.cognitive.microsofttranslator.com/translate |
| Headers | Ocp-Apim-Subscription-Key, Ocp-Apim-Subscription-Region |
⚠️ Exam Trap: Translator requires both key AND region headers.
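A minimal sketch of assembling a Translator request that makes the trap concrete: both headers go in every call. The helper name, placeholder key, and region value are assumptions for illustration; the endpoint and header names come from the table above:

```python
import uuid

TRANSLATE_ENDPOINT = "https://api.cognitive.microsofttranslator.com/translate"

def build_translate_request(key: str, region: str, text: str, to_lang: str):
    """Return (params, headers, body) for a Translator call.
    Translator needs BOTH the key and the region header."""
    params = {"api-version": "3.0", "to": to_lang}
    headers = {
        "Ocp-Apim-Subscription-Key": key,
        "Ocp-Apim-Subscription-Region": region,  # omitting this is the exam trap
        "Content-Type": "application/json",
        "X-ClientTraceId": str(uuid.uuid4()),
    }
    body = [{"text": text}]  # Translator takes a JSON array of text objects
    return params, headers, body

params, headers, body = build_translate_request("<key>", "westus2", "Hello", "fr")
```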
BLEU (Bilingual Evaluation Understudy) scores measure how closely a machine translation matches reference translations. Know these ranges for exam questions about translation model evaluation.
BLEU Score (Translation Quality):
| Range | Quality |
|---|---|
| 0-19 | Low |
| 20-39 | Acceptable |
| 40-59 | High quality |
| 60+ | Very high |
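The band boundaries above can be captured in a small helper for self-quizzing; the function name and return labels are mine, the ranges come straight from the table:

```python
def bleu_quality(score: float) -> str:
    """Map a BLEU score (0-100 scale) to the quality band in the table."""
    if score >= 60:
        return "Very high"
    if score >= 40:
        return "High quality"
    if score >= 20:
        return "Acceptable"
    return "Low"

print(bleu_quality(35))  # "Acceptable"
print(bleu_quality(62))  # "Very high"
```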