3.2.2. Permissions, Controls, and Risk Mitigation
💡 First Principle: Protecting against Copilot-related data risks requires the same controls that protect against any data access risk: good permissions hygiene, sensitivity labels on sensitive content, DLP policies, and monitoring. Copilot doesn't require a new security model; it requires applying the existing model rigorously.
The control stack for Copilot security:
| Control Layer | Tool | What It Does |
|---|---|---|
| Identity | Microsoft Entra ID | Only authenticated, licensed users can access Copilot |
| Permissions | SharePoint, Exchange, OneDrive permissions | Controls what data Copilot can reach per user |
| Classification | Sensitivity labels (Purview) | Encrypts content; Copilot can't read encrypted content the user can't decrypt |
| Data loss prevention | DLP policies | Can apply to Copilot interactions, preventing sensitive data in prompts/responses |
| Monitoring | DSPM for AI (Purview) | Surfaces what AI tools are doing with data; identifies oversharing exposed by Copilot |
| Threat signals | Microsoft Defender | Detects anomalous Copilot usage patterns |
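The classification row above can be modeled as a simple access check: Copilot's effective reach for a given user is the intersection of what permissions grant and what the user's label rights let them decrypt. A minimal sketch of that logic (the function and data structures are illustrative, not a Microsoft API):

```python
# Illustrative model: Copilot surfaces content only when the user both
# has permission to the item AND can decrypt its sensitivity label.
def copilot_can_read(user, item):
    has_permission = user["id"] in item["allowed_users"]
    label = item.get("label")  # None means unlabeled (no encryption)
    can_decrypt = label is None or label in user["decryptable_labels"]
    return has_permission and can_decrypt

alice = {"id": "alice", "decryptable_labels": {"Confidential"}}
report = {"allowed_users": {"alice"}, "label": "Highly Confidential"}
memo = {"allowed_users": {"alice"}, "label": "Confidential"}

print(copilot_can_read(alice, report))  # False: permitted, but cannot decrypt
print(copilot_can_read(alice, memo))    # True: permitted and can decrypt
```

The key point the model captures: a label's encryption is an independent gate layered on top of permissions, so labeling sensitive content limits Copilot even where permissions are too broad.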
Pre-deployment checklist for Copilot:
- Review and restrict overshared SharePoint files ("Everyone" sharing links, broad group access)
- Apply sensitivity labels to confidential and highly confidential content
- Configure DLP policies to include Copilot as a monitored location
- Enable DSPM for AI to track AI-related data activity
- Review license assignments and remove Copilot licenses from users who don't need them
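The first checklist item can be partly automated: Microsoft Graph exposes sharing links on a drive item's permission objects, where a link scope of "anonymous" or "organization" indicates broad access. A hedged sketch that flags such permissions from already-fetched Graph data (the data shape mirrors the Graph permission resource, but the helper and samples here are illustrative; verify field names against your tenant's responses):

```python
# Flag broadly scoped sharing links in Graph-style permission objects.
# "anonymous" = anyone with the link; "organization" = everyone in the tenant.
BROAD_SCOPES = {"anonymous", "organization"}

def find_overshared(permissions):
    """Return permission entries whose sharing-link scope is broader than named users."""
    return [
        p for p in permissions
        if p.get("link", {}).get("scope") in BROAD_SCOPES
    ]

# Sample data shaped like Graph's driveItem permission resource (illustrative).
perms = [
    {"id": "1", "link": {"scope": "organization", "type": "view"}},
    {"id": "2", "link": {"scope": "users", "type": "edit"}},
    {"id": "3", "roles": ["read"]},  # direct grant, no sharing link
]
print([p["id"] for p in find_overshared(perms)])  # ['1']
```

Running a sweep like this before enabling Copilot surfaces the oversharing that Copilot would otherwise surface for you.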
⚠️ Exam Trap: Disabling Copilot features doesn't fix oversharing. If a sensitive file is accessible to 500 users because of broken permissions, disabling Copilot doesn't fix that problem; it only prevents Copilot from surfacing it. The fix is to repair the permissions.
Reflection Question: Before deploying Copilot, your security team wants to reduce the risk of sensitive data exposure. They propose disabling Copilot for users with access to HR files. Is this the right approach? What should they do instead?