Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

5.3.3. Monitoring and Optimization Questions

Question 7

A Delta table in your lakehouse has degraded query performance over the past week. The table receives continuous streaming writes from an eventstream. Queries that took 10 seconds now take 5 minutes.

What should you do first?

  • A. Increase Spark executor memory
  • B. Run the OPTIMIZE command
  • C. Rebuild the table from scratch
  • D. Add more partitions
Answer: B. Run the OPTIMIZE command

Explanation: Streaming writes create many small files, and query performance degrades because Spark must open and read each file individually. OPTIMIZE compacts the small files into larger ones, dramatically improving read performance. This is a common pattern, and OPTIMIZE should be scheduled regularly for tables that receive streaming writes.
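In Fabric, OPTIMIZE can be run from a notebook or scheduled in a pipeline. A minimal sketch (the table name is illustrative, not from the scenario):

```sql
-- Compact the many small files produced by streaming writes (table name is a placeholder)
OPTIMIZE lakehouse_db.sensor_events;
```

In practice this is often paired with VACUUM to remove the files that compaction leaves behind, once they fall outside the table's retention window.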


Question 8

Your security team requires 2-year retention of all Fabric workspace activity logs for SOC 2 compliance. Monitor Hub shows only 30 days of history.

What should you configure?

  • A. Increase Monitor Hub retention settings
  • B. Export logs to Azure Log Analytics workspace
  • C. Create custom retention policies in Fabric Admin Portal
  • D. Use Power BI to archive Monitor Hub data
Answer: B. Export logs to Azure Log Analytics workspace

Explanation: Monitor Hub retains history only for a short window (about 30 days) and its retention cannot be extended to years. Azure Log Analytics integration enables configurable long-term retention (months to years), which is the enterprise solution for compliance requirements such as SOC 2.
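Once logs land in a Log Analytics workspace, retention is configured per workspace (or per table) and the data can be queried with KQL. A sketch of a compliance-style query; the table name here is a placeholder, since the actual table depends on how the export is configured, not a confirmed Fabric schema:

```kusto
// Placeholder table name; actual tables depend on the configured log export
FabricActivity_CL
| where TimeGenerated > ago(730d)   // 2-year SOC 2 window
| summarize events = count() by OperationName, bin(TimeGenerated, 1d)
```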


Question 9

A pipeline failed due to a script activity error. You need to ensure the pipeline execution fails with a customized error message and code.

Which two actions should you perform?

  • A. Add a Fail activity
  • B. Configure custom error settings in the Fail activity
  • C. Use a Try-Catch activity
  • D. Enable detailed logging
Answer: A and B

Explanation: The Fail activity is specifically designed to terminate pipelines with custom error messages and codes. You must add the activity and configure its error settings. There is no Try-Catch activity in Fabric pipelines. Logging doesn't address the requirement to fail with custom messages.
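A Fail activity is typically chained to the script activity's Failed dependency condition, with the custom message and code set in its type properties. A hedged sketch of the pipeline JSON (activity and property names follow the Data Factory Fail activity schema; the specific names and values are illustrative):

```json
{
  "name": "Fail with custom error",
  "type": "Fail",
  "typeProperties": {
    "message": "Script activity failed: see run history for details",
    "errorCode": "ERR_SCRIPT_001"
  },
  "dependsOn": [
    { "activity": "Run script", "dependencyConditions": [ "Failed" ] }
  ]
}
```

The message and errorCode properties also accept dynamic expressions, so the error output of the failed activity can be surfaced in the custom message.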


Question 10

You discover that a Dataflow Gen2 refresh fails after running for approximately one hour. The dataflow uses an on-premises data gateway.

What should you do first?

  • A. Verify the maximum parameters per pipeline
  • B. Verify the version of the on-premises data gateway
  • C. Check for queued refresh runs
  • D. Increase the refresh timeout setting
Answer: B. Verify the version of the on-premises data gateway

Explanation: An outdated on-premises data gateway is a common cause of long-running refresh failures, because older gateway versions have compatibility issues with Fabric. Verifying the gateway version is the quickest check, so do it first before investigating other causes. Pipeline parameter limits and queued refresh runs are unrelated to Dataflow Gen2 gateway issues.

Written by Alvin Varughese, Founder (15 professional certifications)