4.1.4. Workspace Logging with Azure Log Analytics
💡 First Principle: Monitor Hub shows Fabric-native activity logs, but enterprise observability often requires integration with Azure's broader monitoring ecosystem. Workspace logging exports detailed telemetry to Azure Log Analytics for advanced querying, long-term retention, and cross-service correlation.
Scenario: Your organization uses Azure Monitor for all cloud infrastructure. The security team requires centralized logging of all data access patterns across Azure services, including Fabric. Monitor Hub alone cannot provide this integration.
Key Differences: Monitor Hub vs. Workspace Logging
| Aspect | Monitor Hub | Workspace Logging (Log Analytics) |
|---|---|---|
| Retention | Limited (days/weeks) | Configurable (months/years) |
| Query Language | Basic filtering | KQL (full power) |
| Integration | Fabric-only | Azure Monitor, Sentinel, third-party SIEM |
| Alerting | Basic | Advanced with Azure Monitor Alerts |
| Cost | Included | Log Analytics ingestion costs |
Configuring Workspace Logging
- Prerequisite: Create an Azure Log Analytics workspace in the Azure portal
- Navigate to Fabric Admin Portal → Audit and usage settings
- Enable Azure Log Analytics integration
- Select the target Log Analytics workspace (subscription, resource group, and workspace name)
- Choose which log categories to export
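Once categories are flowing, a quick row count per category confirms the export is working. This sketch assumes the `FabricLogs` table and `Category` column used by the queries later in this section; actual table and column names may differ in your tenant.

```kusto
// Confirm telemetry is arriving: events per exported category in the last hour
// (table and column names are illustrative and may differ in your tenant)
FabricLogs
| where TimeGenerated > ago(1h)
| summarize Events = count() by Category
| order by Events desc
```

An empty result usually means the integration was just enabled; export latency of several minutes is normal before the first records appear.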
Log Categories Available:
| Category | Data Included | Use Case |
|---|---|---|
| Audit Logs | User actions, permission changes | Security compliance |
| Operation Logs | Pipeline runs, refresh activities | Operational monitoring |
| Performance Logs | Query durations, resource consumption | Performance analysis |
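As a sketch of the audit category's security-compliance use case, a query for recent permission changes might look like the following. The `FabricLogs` schema matches the examples later in this section, but the `OperationName` values shown here are assumptions, not documented event names.

```kusto
// Who changed workspace permissions in the last 7 days?
// (OperationName values are hypothetical placeholders)
FabricLogs
| where TimeGenerated > ago(7d)
| where Category == "Audit"
| where OperationName in ("AddWorkspaceAccess", "RemoveWorkspaceAccess")
| project TimeGenerated, User, OperationName, WorkspaceName
| order by TimeGenerated desc
```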
Querying Fabric Logs in Log Analytics
```kusto
// Find all failed pipeline runs in the last 24 hours
FabricLogs
| where TimeGenerated > ago(24h)
| where Category == "PipelineRuns"
| where Status == "Failed"
| project TimeGenerated, WorkspaceName, PipelineName, ErrorMessage
| order by TimeGenerated desc
```

```kusto
// Analyze refresh patterns by user over the last 30 days
FabricLogs
| where TimeGenerated > ago(30d)  // bound the scan window to control query cost
| where Category == "Refresh"
| summarize RefreshCount = count() by User, bin(TimeGenerated, 1d)
| render timechart
```
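The "Advanced with Azure Monitor Alerts" row in the earlier comparison table becomes concrete when a query like this backs an Azure Monitor log alert rule. The threshold and column names are illustrative, following the same assumed `FabricLogs` schema as the queries above:

```kusto
// Alert candidate: workspaces with more than 5 failed pipeline runs in 15 minutes
FabricLogs
| where TimeGenerated > ago(15m)
| where Category == "PipelineRuns" and Status == "Failed"
| summarize Failures = count() by WorkspaceName
| where Failures > 5
```

An alert rule evaluating this query on a 15-minute schedule fires whenever any workspace crosses the threshold, something Monitor Hub's basic alerting cannot express.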
Visual: Logging Architecture — Fabric workspaces emit telemetry → selected log categories export to the Log Analytics workspace → KQL queries, Azure Monitor alerts, and Sentinel/SIEM integrations consume the data.
⚠️ Exam Trap: Enabling every log category can get expensive, because Log Analytics charges by data ingestion volume. Start with audit and operation logs, and add performance logs only when a specific analysis requires them.
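To keep that ingestion cost visible, the built-in `Usage` table (a standard table in every Log Analytics workspace) reports billable volume per data type, so you can see which categories drive spend before trimming them:

```kusto
// Billable ingestion (MB) per table over the last 30 days
Usage
| where TimeGenerated > ago(30d)
| where IsBillable == true
| summarize IngestedMB = sum(Quantity) by DataType
| order by IngestedMB desc
```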
Key Trade-Offs:
- Comprehensive Logging vs. Cost: More log categories increase visibility but also Log Analytics costs
- Centralization vs. Complexity: Centralized logging enables correlation but requires Log Analytics expertise
Reflection Question: Your security team requires 2-year retention of all data access logs for compliance. Can Monitor Hub alone meet this requirement? What configuration would you recommend?