Copyright (c) 2026 MindMesh Academy. All rights reserved. This content is proprietary and may not be reproduced or distributed without permission.

8. Glossary

A2A (Agent2Agent Protocol) — An open standard enabling agent-to-agent communication and task delegation. Distinct from MCP, which connects agents to tools. See Section 1.2.2.

Agent Activity Feed — A monitoring interface in Dynamics 365 Contact Center that shows supervisors real-time visibility into autonomous agent actions, decisions, and escalations. See Section 5.1.1.

Agent Flow — Copilot Studio's mechanism for orchestrating agent-level conversation logic (reasoning, topic routing, multi-step interactions). Distinct from Power Automate flows, which handle backend workflow automation. See Section 3.2.2.

Agentic AI — AI systems that can perceive events, make decisions, and execute tasks autonomously, ranging from simple reactive agents to fully autonomous systems. See Section 1.1.1.

AI Builder — Power Platform component that provides prebuilt and custom AI models for document processing, text classification, object detection, and prediction within Power Apps and Power Automate. See Section 4.4.1.

AI Center of Excellence (AI CoE) — A cross-functional organizational body that provides governance, best practices, skills development, and standards for enterprise AI adoption. See Section 2.2.2.

ALM (Application Lifecycle Management) — The process of managing an application through development, testing, deployment, and retirement. AI ALM extends this with model versioning, prompt management, and data lineage. See Section 6.1.

Business Terms — Copilot configuration in Dynamics 365 that maps natural language vocabulary to Dataverse fields and values, enabling Copilot to understand domain-specific language without schema changes. See Section 4.1.1.

CLU (Conversational Language Understanding) — Azure AI service that provides trained intent recognition with entity extraction. Used in Copilot Studio for scenarios requiring deterministic, predictable intent routing. See Section 3.2.4.

Cloud Adoption Framework (CAF) — Microsoft's methodology for cloud and AI adoption covering strategy, planning, governance, skills readiness, and organizational change management. See Section 2.2.1.

Computer Use — Copilot Studio capability that enables agents to interact with application UIs directly (clicking, typing, navigating) for scenarios where no API is available. See Section 3.3.2.

Connection Reference — Power Platform mechanism that separates connector definition from authentication credentials, enabling environment-specific configurations during solution promotion. See Section 6.1.1.

Containment Rate — The percentage of conversations fully resolved by an AI agent without any human involvement. Key metric for contact center ROI. See Section 5.1.1.

Data Residency — Legal and regulatory requirements governing where data can be stored, processed, and transferred. For AI solutions, extends to model training data jurisdiction and inference processing location. See Section 6.3.2.

Decision Lineage — A complete audit trail from user input through model processing to output, including retrieved context, model version, and prompt template. Required for regulated AI decisions. See Section 6.3.3.

Declarative Agent — M365 Copilot extensibility mechanism that adds custom instructions, tone, and knowledge scope through configuration (no code). See Section 4.5.3.

DLP (Data Loss Prevention) — Power Platform policies that control which connectors can be used together, preventing data leakage through connector grouping. See Section 6.2.1.

Drift Detection — Monitoring process that identifies when an AI model's accuracy degrades over time due to changes in input data distribution or real-world conditions. See Section 5.1.2.

Fallback Topic — A Copilot Studio topic that activates when no other topic matches the user's intent. Design of fallback behavior significantly affects user experience. See Section 3.2.1.

Fine-Tuning — The process of further training a pre-existing AI model on domain-specific data to improve its performance on specific tasks. Requires more effort than prompt engineering or RAG. See Section 4.3.1.

Foundry Models — Microsoft Foundry's model catalog containing thousands of AI models from Microsoft, OpenAI, Anthropic, Meta, Mistral, and other providers for deployment and fine-tuning. See Section 4.3.1.

Foundry Tools — Microsoft Foundry's pre-built AI services including Document Intelligence, Speech, Vision, Language, and Content Safety. See Section 4.3.

Generative AI Orchestration — Copilot Studio's LLM-based approach to understanding user intent and routing conversations, offering dynamic and context-aware understanding. See Section 3.2.4.

Graph Connector — M365 extensibility mechanism that ingests external data into Microsoft Graph's search index, making it searchable by Copilot. Pre-indexes data for search. See Section 4.5.3.

Grounding — The process of providing an AI model with relevant context data (through RAG, knowledge sources, or other mechanisms) to produce accurate, factual responses. See Section 1.3.2.

Handoff — The process of transferring a conversation from an AI agent to a human service representative, including context transfer, skills-based routing, and sentiment-aware escalation. See Section 4.1.2.

Identity-Aware Retrieval — A security pattern where an AI agent queries data sources using the end user's identity and permissions, ensuring the agent only surfaces data the user is authorized to see. See Section 6.2.3.

MCP (Model Context Protocol) — An open standard for connecting agents to tools and data sources (agent-to-tool connectivity). Distinct from A2A, which handles agent-to-agent communication. See Section 1.2.2.

Model Router — An intelligent routing system that selects the most suitable AI model per request based on task complexity, cost, latency, and capability requirements. See Section 2.3.3.

Multi-Agent Orchestration — Architecture pattern where multiple agents with distinct roles collaborate through defined patterns (sequential, parallel, hierarchical) with shared context. See Section 1.3.1.

Perceive-Reason-Act Loop — The fundamental operating cycle of agentic AI: perceive inputs/events, reason about appropriate actions, act on decisions. See Section 1.1.2.

Prompt Action — Copilot Studio component that sends dynamic prompts to AI models during conversation flow, enabling flexible AI-generated responses within structured conversations. See Section 3.2.3.

Prompt Injection — An attack where adversarial instructions are embedded in user inputs (direct) or retrieved data (indirect) to manipulate AI model behavior. See Section 6.2.2.

Prompt Library — An enterprise-governed collection of prompt templates with versioning, role-based access, testing standards, and quality controls. See Section 2.2.6.

RAG (Retrieval-Augmented Generation) — Architecture pattern where an AI model retrieves relevant documents/data before generating responses, improving accuracy and grounding. See Section 1.3.2.

Resolution Rate — The percentage of agent conversations that reach successful resolution without human handoff. Primary measure of agent effectiveness. See Section 5.1.1.

Responsible AI — Microsoft's framework of six principles (fairness, reliability/safety, privacy/security, inclusiveness, transparency, accountability) applied throughout the AI lifecycle. See Section 6.3.1.

SLM (Small Language Model) — Language models optimized for specific tasks with lower parameter counts, offering lower latency, reduced cost, and smaller infrastructure requirements than LLMs. See Section 2.2.5.

Task Agent — An agent type that executes defined, multi-step operations when triggered by a user or system event. Scoped, predictable, and governable. See Section 3.1.1.

TCO (Total Cost of Ownership) — The complete cost of an AI solution including model hosting, data pipeline maintenance, monitoring, retraining, support, and opportunity cost. See Section 2.3.1.

Topic — A Copilot Studio conversation unit that groups trigger phrases, conversation flow logic, and responses for a specific user intent. See Section 3.2.1.

Transfer Question — A practice question that tests concepts in scenarios not found in the study guide, requiring the learner to apply principles to novel contexts. See Exam Question Bank Creator skill.

Well-Architected Framework — Microsoft's guidance for designing reliable, secure, and performant workloads. For intelligent workloads, adds AI-specific concerns to each of five pillars. See Section 4.4.2.

Alvin Varughese
Written byAlvin Varughese
Founder15 professional certifications