6.2.1. Agent Security and Governance
💡 First Principle: An agent's security boundary is only as strong as its weakest access point. Agents interact with users through channels, access data through connectors, and invoke actions through integrations — each is an attack surface that must be independently secured.
Agent Security Layers:
| Layer | Threat | Control |
|---|---|---|
| Channel | Unauthorized access, impersonation | Channel authentication, user identity verification |
| Conversation | Social engineering, information extraction | Content filtering, sensitive data detection, conversation guardrails |
| Data access | Unauthorized data exposure through agent responses | Connector scoping, row-level security, data loss prevention policies |
| Action execution | Agent performing unauthorized operations | Action approval gates, least-privilege connector permissions, action logging |
| Administration | Unauthorized changes to agent configuration | Role-based access control for agent management, change auditing |
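The action-execution controls in the table above (approval gates, least privilege, logging) can be sketched in a few lines. This is a minimal illustration, not a real platform API: the action names, the `HIGH_RISK_ACTIONS` set, and the in-memory audit log are all assumptions.

```python
# Illustrative action approval gate: high-risk actions require an explicit
# approver before execution, and every attempt is logged for auditing.
HIGH_RISK_ACTIONS = {"delete_record", "send_external_email", "update_payroll"}

AUDIT_LOG = []  # a real system would write to a tamper-evident store

def log_action(action, status, approver=None):
    AUDIT_LOG.append({"action": action, "status": status, "approver": approver})

def execute_action(action, payload, approved_by=None):
    """Run an agent action, requiring human approval for high-risk ones."""
    if action in HIGH_RISK_ACTIONS and approved_by is None:
        # Block and log instead of executing; a real system would queue
        # the request for a human approver.
        log_action(action, status="blocked_pending_approval")
        return {"status": "pending_approval"}
    log_action(action, status="executed", approver=approved_by)
    return {"status": "executed"}
```

The key design choice is that the gate sits between the agent's decision and the side effect, so even a manipulated agent cannot complete a high-risk operation unilaterally.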
Data Loss Prevention (DLP) for Agents:
Power Platform DLP policies control which connectors can be used together in the same agent. This prevents risky combinations, such as grouping a customer-facing channel connector with an internal-only data connector, which would otherwise create a path for data leakage. The architect designs DLP policies that balance security with agent functionality.
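The DLP rule can be expressed as a simple invariant: all of an agent's connectors must fall in the same policy group, and none may be blocked. The sketch below assumes illustrative connector names and group assignments; real DLP policies are configured in the Power Platform admin center, not in code.

```python
# Toy DLP policy: each connector is classified into a group, and connectors
# from different groups may not be combined in the same agent.
DLP_POLICY = {
    "shared_sql": "Business",
    "shared_sharepointonline": "Business",
    "shared_twitter": "Non-Business",
    "shared_deprecated_ftp": "Blocked",
}

def validate_agent_connectors(connectors, policy=DLP_POLICY):
    """Return True only if all connectors share one non-blocked DLP group."""
    groups = {policy.get(c, "Non-Business") for c in connectors}
    if "Blocked" in groups:
        return False
    # Mixing groups (e.g. Business + Non-Business) is the leakage path
    # the policy exists to prevent.
    return len(groups) <= 1
```

A check like this belongs in the agent approval pipeline, so a misgrouped connector is caught before deployment rather than discovered in an audit.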
Governance Framework:
| Governance Area | What to Define | Who Owns It |
|---|---|---|
| Agent catalog | Which agents exist, their purpose, their data access | Platform team |
| Approval process | How new agents are approved for deployment | Security + business owner |
| Access reviews | Regular review of agent permissions and connector access | Security team |
| Retirement policy | How decommissioned agents are removed cleanly | Platform team |
| Incident response | What happens when an agent is compromised | Security + operations |
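Two of the governance areas above, the agent catalog and access reviews, lend themselves to a concrete data model. The record fields and the 90-day review window below are assumptions for illustration; an organization would set its own schema and cadence.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AgentRecord:
    """Illustrative agent-catalog entry: purpose, ownership, data access."""
    name: str
    owner: str
    connectors: list
    last_access_review: date

def overdue_reviews(catalog, max_age_days=90, today=None):
    """List agents whose last access review is older than the policy window."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [a.name for a in catalog if a.last_access_review < cutoff]
```

Keeping the catalog machine-readable lets the security team automate access reviews instead of relying on manual tracking across 15 or 50 agents.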
Troubleshooting Scenario: An organization's AI agent for employee self-service accidentally exposes salary data. The agent has proper Dataverse permissions at the app level, but its knowledge source includes an HR SharePoint site with compensation documents. The security gap: the agent inherits broad knowledge base access without applying user-level permissions at the grounding layer. The fix requires implementing identity-aware retrieval — the agent must query Dataverse and SharePoint in the context of the requesting user's permissions, not its own service account permissions.
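The identity-aware retrieval fix can be sketched as follows: the grounding query carries the requesting user's identity, and the backend filters results by that user's permissions rather than the agent's service-account access. The `KnowledgeBackend` class and its per-document ACLs are a toy stand-in for SharePoint/Dataverse search, not a real API.

```python
class KnowledgeBackend:
    """Toy knowledge source; real systems delegate to SharePoint/Dataverse."""
    def __init__(self, docs):
        # Each document is (content, set of users allowed to read it).
        self.docs = docs

    def search(self, query, on_behalf_of):
        # Filter by the caller's identity, emulating source-system ACLs.
        return [content for content, allowed in self.docs
                if on_behalf_of in allowed and query.lower() in content.lower()]

def answer_question(question, user, backend):
    # Query in the requesting user's context, not the agent's service
    # account, so grounding can only surface what the user may already see.
    return backend.search(question, on_behalf_of=user)
```

With this pattern, the salary documents in the scenario simply never enter the agent's context for a user who lacks access to them.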
AI agent security requires five distinct layers, each protecting a different attack surface: identity and access (who can use the agent), channel security (where the agent operates), data access controls (what data the agent can retrieve), action permissions (what the agent can do), and output filtering (what the agent can say). Missing any single layer creates an exploitable gap.
Reflection Question: A company deploys 15 agents across different departments, each with different data access needs. Two months later, security discovers that the HR agent's connector also has access to financial data because it was grouped with a shared Dataverse connector. Design the governance model that prevents this.