1.1.2. How Agents Perceive, Reason, and Act
Every agent — regardless of its position on the autonomy spectrum — follows the same fundamental loop: perceive inputs, reason about what to do, and act on the decision. What changes across the spectrum is the sophistication of each stage.
This loop is the mental model that will serve you throughout the exam. When evaluating any agent design question, ask: What does it perceive? How does it reason? What actions can it take? What constrains those actions?
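To make the loop concrete, here is a minimal sketch in Python. Nothing below is a Copilot Studio API; the class, method names, and trigger shape are illustrative assumptions, meant only to show how perceive, reason, and act chain together and where guardrails intercept the flow.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Illustrative perceive-reason-act loop (not a Copilot Studio API)."""
    instructions: str                        # system prompt / behavioral rules
    guardrails: list = field(default_factory=list)  # predicates over decisions

    def perceive(self, trigger):
        # Normalize the incoming trigger: user prompt, system event, or signal
        return {"type": trigger.get("type"), "payload": trigger.get("payload")}

    def reason(self, observation):
        # Stand-in for LLM-backed reasoning: map an observation to a decision
        if observation["type"] == "user_message":
            return {"action": "respond", "text": f"Echo: {observation['payload']}"}
        return {"action": "none"}

    def act(self, decision):
        # Guardrails constrain the loop before any action executes
        for check in self.guardrails:
            if not check(decision):
                return {"status": "blocked"}
        return {"status": "done", "result": decision}

    def step(self, trigger):
        return self.act(self.reason(self.perceive(trigger)))
```

The useful exam habit is visible in `step`: every agent question can be decomposed into which of these three calls (plus the guardrail check) contains the flaw.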
Perceive: Agents receive inputs through triggers. For Copilots, the trigger is always a user prompt. Task agents add system triggers — a new record in Dataverse, an email arriving, a scheduled time. Autonomous agents perceive continuously — monitoring data streams, event queues, and environmental signals. In Copilot Studio, you define what an agent perceives through its trigger configuration: conversational triggers (user messages matching topics), flow triggers (Dataverse events, scheduled runs), or external triggers via connectors.
Reason: This is where the AI model does its work. The agent's reasoning depends on three inputs:
- Instructions — the system prompt and behavioral guidelines you define when building the agent
- Context — grounding data from knowledge sources, Dataverse, SharePoint, or external systems
- Model capabilities — the underlying language model's ability to understand, analyze, and generate
In Copilot Studio, reasoning happens through generative orchestration — the platform uses an LLM to interpret user intent, select the appropriate topic or action, and formulate a response. For autonomous agents, reasoning also includes evaluating whether conditions warrant action at all.
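The three reasoning inputs can be pictured as layers of a single prompt sent to the model. The helper below is a hypothetical sketch of that assembly; real orchestrators such as Copilot Studio manage this internally, so the function name and layout are assumptions, not platform behavior.

```python
def build_reasoning_prompt(instructions, context_chunks, user_message):
    """Assemble the three reasoning inputs (instructions, context, request)
    into one model prompt. Hypothetical structure for illustration only."""
    context = "\n".join(f"- {chunk}" for chunk in context_chunks)
    return (
        f"System instructions:\n{instructions}\n\n"   # behavioral guidelines
        f"Grounding context:\n{context}\n\n"          # knowledge-source data
        f"User:\n{user_message}"                      # the triggering request
    )
```

Seen this way, a grounding failure (stale or missing `context_chunks`) and an instruction failure (an ambiguous `instructions` string) corrupt the reasoning stage before the model ever runs.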
Act: Actions range from generating a text response to executing complex multi-step workflows. In Copilot Studio, actions include:
- Responding with generated text or adaptive cards
- Calling Power Automate flows
- Invoking HTTP connectors to external systems
- Delegating to other agents (via A2A protocol)
- Using Computer Use to interact with application UIs
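Architecturally, the act stage is a dispatch from a reasoned decision to a concrete handler. The sketch below shows that pattern; the handler names loosely mirror the action types above but are illustrative only, not Copilot Studio components.

```python
def dispatch(decision, handlers):
    """Route a reasoned decision to its action handler.
    Raises if the agent decided on an action it has no handler for."""
    handler = handlers.get(decision["action"])
    if handler is None:
        raise ValueError(f"No handler for action {decision['action']!r}")
    return handler(decision.get("params", {}))

# Illustrative handler table: text response, flow call, HTTP call
handlers = {
    "respond":   lambda p: {"channel": "chat", "text": p.get("text", "")},
    "call_flow": lambda p: {"flow_id": p.get("flow_id"), "status": "queued"},
    "http":      lambda p: {"url": p.get("url"), "status": "sent"},
}
```

The design point: the set of keys in `handlers` is the agent's entire action surface. Anything not in that table is, by construction, something the agent cannot do.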
Guardrails: These constrain the loop. They define what the agent cannot or should not do — topics it must avoid, data it cannot access, actions requiring human approval, escalation thresholds. Guardrails are not optional safety features; they're core architectural components. An autonomous agent without guardrails is a liability, not an asset.
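Two of those guardrail types, blocked topics and human approval above a threshold, can be sketched as a gate in front of the act stage. The function, its parameters, and the `approver` callback are assumptions for illustration, not platform features.

```python
def guarded_act(decision, blocked_topics, approval_threshold, approver):
    """Apply guardrails before executing a decision.
    `approver` is a hypothetical human-in-the-loop callback that
    returns True if a person approves the action."""
    # Refuse topics the agent must avoid
    if decision.get("topic") in blocked_topics:
        return {"status": "refused", "reason": "blocked topic"}
    # High-impact actions require explicit human approval
    if decision.get("cost", 0) > approval_threshold:
        if not approver(decision):
            return {"status": "escalated"}
    return {"status": "executed"}
```

Note that the guardrail runs on the *decision*, after reasoning but before action: the model is free to propose anything, and the architecture decides what actually executes.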
Exam Trap: The exam may present scenarios where an agent "takes incorrect action" and ask you to identify the root cause. Don't jump to "the model hallucinated." More often, the issue is in the perceive stage (wrong data grounding), the guardrails stage (missing constraints), or the instructions (ambiguous system prompt). The reasoning model is usually the last thing to blame.
Reflection Question: An autonomous agent monitors inventory levels but occasionally reorders items that are already in transit. Which stage of the perceive-reason-act loop most likely contains the flaw, and how would you fix it?