Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

7.3. Scenario-Based Practice Questions

Question 1 — Plan Domain (Multi-Agent Strategy)

A global manufacturing company wants to deploy AI across customer service (chatbot for technical support), operations (demand forecasting), and HR (employee onboarding assistant). They have a mature Power Platform environment and D365 Finance/SCM. Their data is distributed across SharePoint, Dataverse, and an on-premises SQL Server.

Which approach best balances coverage, governance, and time-to-value?

A. Build all three agents from scratch in Microsoft Foundry with custom models
B. Use prebuilt D365 capabilities for demand forecasting, Copilot Studio for the chatbot, and extend M365 Copilot with a declarative agent for HR
C. Deploy a single multi-agent orchestration system that handles all three scenarios
D. Outsource all AI development to a Microsoft partner and focus on data preparation

Answer: B. This approach uses the right tool for each scenario: D365 SCM's prebuilt demand forecasting (proven, no custom development needed), Copilot Studio for the customer-facing chatbot (conversational AI with topic design), and M365 Copilot extension for HR (employees already work in Teams/Outlook). Option A over-engineers with custom models. Option C forces unrelated scenarios into a single system. Option D abdicates architectural responsibility.

Question 2 — Design Domain (Copilot Studio)

A contact center receives 5,000 daily inquiries across chat and voice. 60% are about billing, 25% are about technical issues, and 15% are new requests. The billing department has strict response templates that must be followed verbatim. Technical support requires flexible reasoning to diagnose problems. New requests need to be routed to human agents.

Which combination of NLP approaches should the architect recommend?

A. Generative AI orchestration for all three categories
B. CLU for billing (deterministic responses), generative AI for technical support (flexible reasoning), fallback topic for new requests
C. Standard NLP for billing and technical, generative AI for new requests
D. CLU for all three categories

Answer: B. Billing requires deterministic template-following — CLU provides the predictability needed for regulated responses. Technical support benefits from generative AI's ability to reason through novel diagnostic scenarios. New requests don't match existing topics, so the fallback topic catches them and routes to human agents. Option A risks non-deterministic billing responses. Option D can't handle flexible technical reasoning. Option C misapplies both approaches: it leaves technical support without flexible reasoning and points generative AI at new requests that should be routed to human agents.
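The routing pattern behind option B can be sketched in code. The following is an illustrative Python sketch only, not Copilot Studio configuration: the intent labels, template text, confidence threshold, and the stand-in classifier are all hypothetical, standing in for a real CLU model and generative handoff.

```python
# Illustrative sketch of the hybrid pattern in option B: deterministic
# (CLU-style) template responses for billing, generative reasoning for
# technical support, and a fallback that escalates to a human agent.
# All names, templates, and thresholds are hypothetical.

BILLING_TEMPLATES = {
    # Regulated responses that must be returned verbatim, never paraphrased.
    "billing.refund_status": "Your refund is being processed per policy R-1.",
}

def classify(utterance: str) -> tuple[str, float]:
    """Stand-in for a CLU-style intent classifier: returns (intent, confidence)."""
    text = utterance.lower()
    if "refund" in text:
        return "billing.refund_status", 0.92
    if "error" in text or "crash" in text:
        return "technical.diagnose", 0.85
    return "unknown", 0.30

def route(utterance: str, threshold: float = 0.7) -> str:
    intent, confidence = classify(utterance)
    if intent == "unknown" or confidence < threshold:
        # Fallback topic: no confident intent match, hand off to a person.
        return "ESCALATE: routed to human agent"
    if intent in BILLING_TEMPLATES:
        # Deterministic path: the template is returned exactly as written.
        return BILLING_TEMPLATES[intent]
    # Flexible path: delegate open-ended diagnosis to a generative model.
    return f"GENERATIVE: reasoning over '{utterance}'"
```

The key design point the exam is testing: the deterministic and generative paths are separate by construction, so a regulated billing answer can never be paraphrased by the generative model.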

Question 3 — Deploy Domain (Security)

An enterprise deploys a customer-facing agent grounded on product documentation stored in SharePoint. A security review finds that the agent occasionally references internal pricing strategies from a confidential document that was accidentally uploaded to the same SharePoint site. The document has been removed, but the team is concerned about similar future incidents.

Which architectural controls should the architect implement? (Choose 2)

A. Retrain the model to exclude confidential information
B. Implement identity-aware retrieval so the agent queries SharePoint with the user's permissions
C. Add output filtering to detect and block responses containing pricing-related content
D. Disable the agent until a full security audit is complete

Answer: B and C. Identity-aware retrieval (B) ensures the agent only accesses documents the user is authorized to see — preventing exposure of confidential documents even if they're accidentally uploaded. Output filtering (C) adds a defense-in-depth layer that catches sensitive information even if retrieval controls fail. Option A is wrong — the issue isn't in the model's training but in the data it retrieves. Option D is an overreaction that doesn't fix the underlying vulnerability.
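The output-filtering layer in option C can be sketched as a post-processing guard that runs after retrieval controls. This is a minimal illustrative Python sketch under stated assumptions: a production deployment would use a content-safety service and classifiers rather than a keyword list, and the patterns below are hypothetical.

```python
import re

# Defense-in-depth output filter: runs AFTER identity-aware retrieval, as a
# second layer that blocks responses matching sensitive patterns even if the
# retrieval controls fail. Patterns here are illustrative placeholders.
SENSITIVE_PATTERNS = [
    re.compile(r"internal pricing", re.IGNORECASE),
    re.compile(r"confidential", re.IGNORECASE),
]

def filter_response(response: str) -> str:
    """Return the response unchanged, or a safe refusal if it matches a pattern."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(response):
            # Block the leak and substitute a refusal instead of the content.
            return ("I can't share that information. "
                    "Please contact your account representative.")
    return response
```

Note that the filter substitutes a refusal rather than redacting in place; partial redaction can still leak context around the removed text.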

Question 4 — Deploy Domain (ALM)

A Copilot Studio agent is promoted from dev to production. In production, the agent's custom connector returns authentication errors, and one topic produces different responses than in dev. All other topics work correctly. Infrastructure monitoring shows no issues.

What are the two most likely causes?

A. The production environment has a different Copilot Studio version
B. The connection reference wasn't updated with production credentials
C. The topic's knowledge source points to the dev SharePoint site
D. The model was retrained between promotion and deployment

Answer: B and C. Connection references (B) must be updated per environment — if dev credentials are carried to production, authentication fails. Knowledge source configuration (C) is environment-specific — a topic grounded on a dev SharePoint site will produce different results in production. Option A is unlikely (Copilot Studio is SaaS, same version across environments). Option D is irrelevant — solution promotion doesn't trigger model retraining.
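Both failure modes share a root cause: environment-specific settings that were carried over unchanged during promotion. A pre-promotion check for that pattern can be sketched as follows. This is an illustrative Python sketch, not Power Platform tooling; the setting names, URL, and "dev marker" heuristic are all hypothetical.

```python
# Illustrative pre-promotion check: flag environment-specific settings
# (connection references, knowledge-source URLs) whose values still point
# at a dev resource. Setting names and values below are hypothetical.

DEV_MARKERS = ("-dev", ".dev.", "/support-dev")

def find_stale_settings(settings: dict[str, str]) -> list[str]:
    """Return the names of settings whose values still reference dev resources."""
    return [
        name
        for name, value in settings.items()
        if any(marker in value.lower() for marker in DEV_MARKERS)
    ]

# The scenario from the question: the connection reference was remapped,
# but the topic's knowledge source was not.
prod_settings = {
    "crm_connector_connection": "shared-crm-prod-credentials",
    "kb_sharepoint_site": "https://contoso.sharepoint.com/sites/support-dev",
}
stale = find_stale_settings(prod_settings)
```

In real Power Platform ALM this remapping is done during solution import (for example via a deployment settings file), but the point the question tests is the same: nothing in the promotion process updates these values automatically unless you configure it to.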

Question 5 — Cross-Domain (Responsible AI + Design)

A financial institution wants to deploy an AI agent that pre-screens loan applications. The agent uses customer financial data, credit history, and employment information to generate a recommendation. Regulators require that every decision be explainable and that the system demonstrate fairness across demographic groups.

Which architectural requirements must the architect include?

A. Full decision lineage audit trail, bias testing across protected attributes, human-in-the-loop for final decisions, transparency disclosure to applicants
B. Automated approval without human review (for speed), standard application logs, model accuracy testing
C. Full decision lineage, bias testing, automated approval to reduce human bias
D. Standard audit logs, manual bias review annually, human-in-the-loop for flagged cases only

Answer: A. Financial decisions require full decision lineage (regulators must see why each decision was made), bias testing across demographics (fairness principle), human-in-the-loop (reliability and safety — AI recommends, human decides for high-stakes financial decisions), and transparency disclosure (customers must know AI was involved). Option B skips accountability. Option C removes human oversight, which regulators require for financial decisions. Option D uses standard (non-AI) audit logs and infrequent bias review — insufficient for regulatory compliance.
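The bias-testing requirement can be made concrete with a standard fairness metric such as demographic parity, comparing approval rates across groups. The Python sketch below is illustrative only: a real program would test several metrics (equalized odds, predictive parity) across all protected attributes, and the decision data here is synthetic, invented purely for the example.

```python
# Demographic parity check: compare the agent's approval rate across groups.
# A common rule of thumb (the "four-fifths rule") flags any group whose rate
# falls below 80% of the highest group's rate. Data is synthetic.

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute per-group approval rates from (group, approved) pairs."""
    totals: dict[str, int] = {}
    approved: dict[str, int] = {}
    for group, was_approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(was_approved)
    return {g: approved[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], ratio: float = 0.8) -> list[str]:
    """Return groups whose approval rate is below `ratio` of the best group's rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < ratio * best]

# Synthetic example: group A approved 8/10, group B approved 5/10.
decisions = [("A", True)] * 8 + [("A", False)] * 2 \
          + [("B", True)] * 5 + [("B", False)] * 5
rates = approval_rates(decisions)
flagged = flag_disparity(rates)
```

A check like this belongs in the continuous-evaluation pipeline, not just pre-deployment testing: the annual review in option D is exactly what this automates away.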

Written by Alvin Varughese
Founder, 15 professional certifications